Chapter 7 Functions
===================
* `seq()`
* `is()`, `is.vector()`, `is.matrix()`
* `gsub()`
7\.1 Vectors in R
-----------------
**Variables** in R include **scalars**, **vectors**, and **lists**. **Functions** in R carry out operations on variables, for example, using the `log10()` function to calculate the log to the base 10 of a scalar variable `x`, or using the `mean()` function to calculate the average of the values in a vector variable `myvector`. For example, we can use `log10()` on a scalar object like this:
```
# store value in object
x <- 100
# take log base 10 of object
log10(x)
```
```
## [1] 2
```
Note that while mathematically x is a single number, or a scalar, R considers it to be a vector:
```
is.vector(x)
```
```
## [1] TRUE
```
There are many “is” commands. What is returned when you run `is.matrix()` on a vector?
```
is.matrix(x)
```
```
## [1] FALSE
```
Mathematically this is a bit odd, since a vector is often defined as a one-dimensional matrix, e.g., a single column or single row of a matrix. But in *R* land, a vector is a vector, a matrix is a matrix, and there are no explicit scalars.
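A few other members of the "is" family are sketched below; each returns `TRUE` or `FALSE` depending on the type of object it is given:
```
is.numeric(x)     # does x hold numeric data?
## [1] TRUE
is.character(x)   # does x hold character (text) data?
## [1] FALSE
```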
7\.2 Math on vectors
--------------------
Vectors can serve as the input for mathematical operations. When this is done, *R* does the mathematical operation separately on each element of the vector. This is a distinctive feature of *R* that can be hard to get used to, even for people with previous programming experience.
Let’s make a vector of numbers:
```
myvector <- c(30,16,303,99,11,111)
```
What happens when we multiply `myvector` by 10?
```
myvector*10
```
```
## [1] 300 160 3030 990 110 1110
```
R has taken each of the 6 values of `myvector`, 30 through 111, and multiplied each one by 10, giving us 6 results. That is, what R did was:
```
## 30*10  # first value of myvector
## 16*10  # second value of myvector
## 303*10 # ....
## 99*10
## 11*10
## 111*10 # last value of myvector
```
The normal order of operations rules apply to vectors just as they do to the operations we’re more used to. So multiplying `myvector` by 10 is the same whether you put the 10 before or after the vector. That is, `myvector*10` is the same as `10*myvector`.
```
myvector*10
```
```
## [1] 300 160 3030 990 110 1110
```
```
10*myvector
```
```
## [1] 300 160 3030 990 110 1110
```
What happens when you subtract 30 from `myvector`? Write the code below.
```
myvector-30
```
```
## [1] 0 -14 273 69 -19 81
```
So, what R did was:
```
## 30-30  # first value of myvector
## 16-30  # second value of myvector
## 303-30 # ....
## 99-30
## 11-30
## 111-30 # last value of myvector
```
Again, `myvector-30` is a vectorized operation.
You can also square a vector:
```
myvector^2
```
```
## [1] 900 256 91809 9801 121 12321
```
Which is the same as:
```
## 30^2  # first value of myvector
## 16^2  # second value of myvector
## 303^2 # ....
## 99^2
## 11^2
## 111^2 # last value of myvector
```
You can also take the square root of a vector using the function `sqrt()`…
```
sqrt(myvector)
```
```
## [1] 5.477226 4.000000 17.406895 9.949874 3.316625 10.535654
```
…and take the log of a vector with `log()`…
```
log(myvector)
```
```
## [1] 3.401197 2.772589 5.713733 4.595120 2.397895 4.709530
```
…and just about any other mathematical operation. Here we are working on a standalone vector object, but all of these rules also apply to a column in a matrix or a dataframe.
This attribute of R is called **vectorization**. When you run the code `myvector*10` or `log(myvector)` you are doing a **vectorized** operation \- it’s like normal math with a special vector-based superpower that lets you get more done, faster, than you normally could.
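Vectorized operations can also be chained together. As a quick sketch, here the square root of every element is taken and then each result is rounded to 2 decimal places:
```
round(sqrt(myvector), 2)
## [1]  5.48  4.00 17.41  9.95  3.32 10.54
```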
7\.3 Functions on vectors
-------------------------
As we just saw, we can use functions on vectors. Typically these take a vector as input and process all of its numbers into an output. Call the `mean()` function on the vector we made called `myvector`.
```
mean(myvector)
```
```
## [1] 95
```
Note how we get a single value back \- the mean of all the values in the vector. R saw that we had a vector of multiple values and knew that the mean is a function that applies not to a single number but to a set of numbers.
The function `sd()` calculates the standard deviation. Apply `sd()` to `myvector`:
```
sd(myvector)
```
```
## [1] 110.5061
```
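`mean()` and `sd()` are just two of the many summary functions that collapse a vector into a single number. A few more, as a quick sketch:
```
sum(myvector)      # total of all the values
## [1] 570
min(myvector)      # smallest value
## [1] 11
max(myvector)      # largest value
## [1] 303
length(myvector)   # number of elements in the vector
## [1] 6
```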
7\.4 Operations with two vectors
--------------------------------
You can also subtract one vector from another vector. This can be a little weird when you first see it. Make another vector with the numbers 5, 10, 15, 20, 25, 30\. Call this myvector2:
```
myvector2 <- c(5, 10, 15, 20, 25, 30)
```
Now subtract myvector2 from myvector. What happens?
```
myvector-myvector2
```
```
## [1] 25 6 288 79 -14 81
```
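As with the single-number operations above, R works element by element: the first element of `myvector2` is subtracted from the first element of `myvector`, and so on. Other arithmetic operators behave the same way; a quick sketch:
```
myvector + myvector2   # element-wise addition
## [1]  35  26 318 119  36 141
myvector * myvector2   # element-wise multiplication
## [1]  150  160 4545 1980  275 3330
```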
7\.5 Subsetting vectors
-----------------------
You can extract an **element** of a vector by typing the vector name with the index of that element given in **square brackets**. For example, to get the value of the 3rd element in the vector `myvector`, we type:
```
myvector[3]
```
```
## [1] 303
```
Extract the 4th element of the vector:
```
myvector[4]
```
```
## [1] 99
```
You can extract more than one element by putting a vector of indices in the brackets.
First, say I want to extract the 3rd and the 4th elements. I can make a vector with 3 and 4 in it:
```
nums <- c(3,4)
```
Then put that vector in the brackets:
```
myvector[nums]
```
```
## [1] 303 99
```
We can also do it directly like this, skipping the vector\-creation step:
```
myvector[c(3,4)]
```
```
## [1] 303 99
```
In the chunk below extract the 1st and 2nd elements:
```
myvector[c(1,2)]
```
```
## [1] 30 16
```
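Two other handy forms of bracket subsetting, shown as a quick sketch: a range of indices made with the colon operator (covered in the next section), and negative indices, which *drop* elements rather than keep them:
```
myvector[2:4]   # elements 2 through 4
## [1]  16 303  99
myvector[-1]    # everything except the first element
## [1]  16 303  99  11 111
```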
7\.6 Sequences of numbers
-------------------------
Often we want a vector of numbers in **sequential order**. That is, a vector with the numbers 1, 2, 3, 4, … or 5, 10, 15, 20, … The easiest way to do this is using a colon:
```
1:10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
Note that in R, `1:10` is equivalent to `c(1:10)`.
```
c(1:10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
Usually, to emphasize that a vector is being created, I will use `c(1:10)`.
We can make a sequence from any number to any other number:
```
c(20:30)
```
```
## [1] 20 21 22 23 24 25 26 27 28 29 30
```
We can also do it in *reverse*. In the code below put 30 before 20:
```
c(30:20)
```
```
## [1] 30 29 28 27 26 25 24 23 22 21 20
```
A useful function in *R* is `seq()`, which creates a vector containing a sequence of numbers running from one particular number to another.
```
seq(1, 10)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
Using `seq()` instead of `:` can be useful for readability, since it makes explicit what is going on. More importantly, `seq()` has an argument `by = ...`, so you can make a sequence of numbers with any interval between them. For example, if we want to create the sequence of numbers from 1 to 10 in steps of 1 (i.e. 1, 2, 3, 4, … 10), we can type:
```
seq(1, 10,
by = 1)
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
```
We can change the **step size** by altering the value of the `by` argument given to the function `seq()`. For example, if we want to create a sequence of numbers from 1 to 101 in steps of 20 (i.e. 1, 21, 41, … 101), we can type:
```
seq(1, 101,
by = 20)
```
```
## [1] 1 21 41 61 81 101
```
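`seq()` also has a `length.out` argument: instead of specifying the step size, you can ask for a fixed number of evenly spaced values. A quick sketch:
```
# five evenly spaced values between 0 and 1
seq(0, 1, length.out = 5)
## [1] 0.00 0.25 0.50 0.75 1.00
```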
7\.7 Vectors can hold numeric or character data
-----------------------------------------------
The vector we created above holds numeric data, as indicated by `class()`:
```
class(myvector)
```
```
## [1] "numeric"
```
Vectors can also hold character data, like the letters of a genetic sequence:
```
# vector of character data
myvector <- c("A","T","G")
# how it looks
myvector
```
```
## [1] "A" "T" "G"
```
```
# what class is it?
class(myvector)
```
```
## [1] "character"
```
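A vector can only hold one type of data at a time. If you mix numbers and text in `c()`, R converts everything to character; a quick sketch:
```
mixed <- c(1, 2, "A")
mixed
## [1] "1" "2" "A"
class(mixed)
## [1] "character"
```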
7\.8 Regular expressions can modify character data
--------------------------------------------------
We can use **regular expressions** to modify character data. For example, we can change the Ts to Us (converting the DNA bases to RNA):
```
myvector <- gsub("T", "U", myvector)
```
Now check it out
```
myvector
```
```
## [1] "A" "U" "G"
```
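The same `gsub()` call works on longer character strings, not just single-letter elements. A quick sketch using a made-up sequence (the string below is hypothetical, purely for illustration):
```
dna <- "ATGGCTTAA"   # hypothetical sequence for illustration
gsub("T", "U", dna)
## [1] "AUGGCUUAA"
```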
Regular expressions are a deep subject in computing. You can find some more information about them [here](https://rstudio-pubs-static.s3.amazonaws.com/74603_76cd14d5983f47408fdf0b323550b846.html).
Chapter 8 Plotting vectors in base R
====================================
**By**: Avril Coghlan
**Adapted, edited and expanded**: Nathan Brouwer ([brouwern@gmail.com](mailto:brouwern@gmail.com)) under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/).
8\.1 Preface
------------
This is a modification of part of [“DNA Sequence Statistics (2\)”](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/chapter2.html) from Avril Coghlan’s [*A little book of R for bioinformatics*](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/index.html). Most of the text and code was originally written by Dr. Coghlan and distributed under the [Creative Commons 3\.0](https://creativecommons.org/licenses/by/3.0/us/) license.
8\.2 Plotting numeric data
--------------------------
R allows the production of a variety of plots, including **scatterplots**, **histograms**, **piecharts**, and **boxplots**. Usually we make plots from **dataframes** with 2 or more columns, but we can also make them from two separate vectors. This flexibility is useful, but also can cause some confusion.
For example, if you have two equal-length vectors of numbers, `numeric.vect1` and `numeric.vect2`, you can plot a [**scatterplot**](https://en.wikipedia.org/wiki/Scatter_plot) of the values in `numeric.vect1` against the values in `numeric.vect2` using the **base R** `plot()` function.
First, let’s make up some data and put it in vectors:
```
numeric.vect1 <- c(10, 15, 22, 35, 43)
numeric.vect2 <- c(3, 3.2, 3.9, 4.1, 5.2)
```
Now plot with the base R `plot()` function:
```
plot(numeric.vect1, numeric.vect2)
```
Note that there is a comma between the two vector names. When building plots from dataframes you usually see a tilde (\~), but when you have two vectors you can use just a comma.
Also note the order of the vectors within the `plot()` command and which axes they appear on. The first vector is `numeric.vect1` and it appears on the horizontal x\-axis.
If you want to label the axes on the plot, you can do this by giving the `plot()` function values for its optional arguments `xlab =` and `ylab =`:
```
plot(numeric.vect1, # note again the comma, not a ~
numeric.vect2,
xlab="vector1",
ylab="vector2")
```
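`plot()` accepts many other optional arguments. A quick sketch adding a title, a different plotting symbol, and a color (`main =`, `pch =`, and `col =` are standard base R graphics arguments):
```
plot(numeric.vect1,
     numeric.vect2,
     xlab = "vector1",
     ylab = "vector2",
     main = "numeric.vect2 vs. numeric.vect1",  # plot title
     pch  = 19,                                 # solid circles
     col  = "blue")                             # point color
```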
We can store character data in vectors, so if we want we could do this to set up our labels:
```
mylabels <- c("numeric.vect1","numeric.vect2")
```
Then use bracket notation to call the labels from the vector
```
plot(numeric.vect1,
numeric.vect2,
xlab=mylabels[1],
ylab=mylabels[2])
```
If we want we can use a tilde to make our plot like this:
```
plot(numeric.vect2 ~ numeric.vect1)
```
Note that now, `numeric.vect2` is on the left and `numeric.vect1` is on the right. This flexibility can be tricky to keep track of.
We can also combine these vectors into a single object and plot the data by referencing that object. First, we combine the two separate vectors using the `cbind()` command (note that `cbind()` on two vectors technically produces a matrix rather than a dataframe).
```
df <- cbind(numeric.vect1, numeric.vect2)
```
Then we plot it like this, passing `df` to the `data = ...` argument.
```
plot(numeric.vect2 ~ numeric.vect1, data = df)
```
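If you want a true dataframe rather than a matrix, you can use `data.frame()` to do the combining instead; a quick sketch (`df2` is just a hypothetical name for the new object):
```
# combine the two vectors into a proper dataframe
df2 <- data.frame(numeric.vect1, numeric.vect2)

# plot exactly as before, referencing the dataframe
plot(numeric.vect2 ~ numeric.vect1, data = df2)
```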
8\.3 Other plotting packages
----------------------------
Base R has lots of plotting functions; additionally, people have written packages to implement new plotting capabilities. The package `ggplot2` is currently the most popular plotting package, and `ggpubr` is a package which makes `ggplot2` easier to use. For quick plots we’ll use base R functions, and when we get to more important things we’ll use ggplot2 and ggpubr.
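For comparison, here is a minimal `ggplot2` sketch of the same scatterplot; it assumes `ggplot2` is installed and that the vectors have been combined into a dataframe named `df2` as above:
```
library(ggplot2)

ggplot(data = df2,
       aes(x = numeric.vect1, y = numeric.vect2)) +
  geom_point() +
  labs(x = "vector1", y = "vector2")
```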
Chapter 9 Intro to R objects
============================
**By**: Nathan Brouwer
9\.1 Commands used
------------------
* \<\-
* c()
* length()
* dim()
* is()
9\.2 R Objects
--------------
* Everything in R is an object, works with an object, tells you about an object, etc
* We’ll do a simple data analysis with a t.test and then look at properties of R objects
* There are several types of objects: **vectors, matrices, lists, dataframes**
* R objects can hold numbers, text, or both
* A typical dataframe has columns of **numeric data** and columns of text that represent **factor variables** (aka “**categorical variables**”)
9\.3 Differences between objects
--------------------------------
Different objects are used and show up in different contexts.
* Most practical stats work in R is done with **dataframes** .
* A dataframe is kind of like a spreadsheet, loaded into R.
* For the sake of simplicity, we often load data in as a **vector**. This just makes things smoother when we are starting out.
* **vectors** pop up in many places, usually in a support role until you start doing more programming.
* **matrices** are occasionally used for applied stats stuff but show up more for programming. A matrix is like a stripped\-down dataframe.
* **lists** show up everywhere, but you often don’t know it; many R functions make lists
* Understanding **lists** will help you efficiently work with stats output and make plots.
9\.4 The Data
-------------
We’ll use the following data to explore R objects.
Motulsky 2nd Ed, Chapter 30, page 220, Table 30\.1\. Maximal relaxation of muscle strips of old and young rat bladders stimulated with high concentrations of [norepinephrine](https://en.wikipedia.org/wiki/Norepinephrine) (Frazier et al. 2006\). Response variable is % E.max
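The data are entered by hand with the assignment operator and `c()`. A sketch is below; the `old.E.max` values match the `str()` output shown later in this chapter, while the remaining `young.E.max` values come from Table 30.1 and are not reproduced here:
```
# maximal relaxation (% E.max) of old rat bladder muscle strips
old.E.max <- c(20.8, 2.8, 50, 33.3, 29.4, 38.9, 29.4, 52.6, 14.3)

# young rats are entered the same way (first value 45.5; the remaining
# values from Table 30.1 are omitted here)
# young.E.max <- c(45.5, ...)
```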
9\.5 The assignment operator “\<\-” makes objects
------------------------------------------------
The code above has made two objects. We can use several commands to learn about these objects.
* `is()`: what an object is, i.e., vector, matrix, list, dataframe
* `length()`: how long an object is; only works with vectors and lists, not dataframes!
* `dim()`: how long AND how wide an object is; doesn’t work with vectors, only dataframes and matrices :(
### 9\.5\.1 is()
What is our “old.E.max” object?
```
is(old.E.max)
```
```
## [1] "numeric" "vector"
```
```
is(young.E.max)
```
```
## [1] "numeric" "vector"
```
It’s a vector, containing numeric data.
What if we made a vector like this?
```
cat.variables <- c("old","old","old","old",
"old","old","old","old","old")
```
And used `is()`
```
is(cat.variables)
```
```
## [1] "character" "vector" "data.frameRowLabels"
## [4] "SuperClassMethod"
```
It tells us we have a vector, containing character data. Not sure why it feels the need to tell us all the other stuff…
### 9\.5\.2 length()
Our vector has 9 elements, or is 9 elements long.
```
length(old.E.max)
```
```
## [1] 9
```
Note that `dim()`, for dimension, doesn’t work with vectors!
```
dim(old.E.max)
```
```
## NULL
```
It would be nice if it said something like “1 x 9” for 1 row tall and 9 elements long. So it goes.
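For contrast, a quick sketch of `dim()` on an object that does have dimensions (the matrix here is just a throwaway example):
```
m <- matrix(1:6, nrow = 2)   # a 2 x 3 matrix
dim(m)
## [1] 2 3
nrow(m)
## [1] 2
ncol(m)
## [1] 3
```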
### 9\.5\.3 str()
`str()` stands for “structure”.
* It summarizes info about an object;
* I find it most useful for looking at lists.
* If our vector here was really really long, str() would only show the first part of the vector
```
str(old.E.max)
```
```
## num [1:9] 20.8 2.8 50 33.3 29.4 38.9 29.4 52.6 14.3
```
### 9\.5\.4 c()
* We typically use `c()` to gather together things like numbers, as we did to make our objects above.
* note: this is *lower case* “c”!
* Uppercase is something else
* For me, R’s font makes it hard sometimes to tell the difference between “c” and “C”
* If code isn’t working, one problem might be a “C” instead of a “c”
Use `c()` to combine two objects
```
old.plus.new <- c(old.E.max, young.E.max)
```
Look at the length
```
length(old.plus.new)
```
```
## [1] 17
```
Note that `str()` just shows us the first few values, not all 17.
```
str(old.plus.new)
```
```
## num [1:17] 20.8 2.8 50 33.3 29.4 38.9 29.4 52.6 14.3 45.5 ...
```
9\.6 Debrief
------------
We can…
* learn about objects using `length()`, `is()`, and `str()`
* access parts of a list using $ (and also brackets); see the sketch below
* access parts of vectors using square brackets \[ ]
* save the output of a model / test to an object
* access parts of lists for plotting instead of copying stuff
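A minimal sketch of that last workflow, using two small made-up vectors (the numbers below are hypothetical, purely for illustration; the chapter’s own t-test would use `old.E.max` and `young.E.max`):
```
# hypothetical data for illustration
groupA <- c(21, 35, 29, 50, 14)
groupB <- c(46, 56, 61, 52, 70)

# save the output of a test to an object; t.test() returns a list
tt <- t.test(groupA, groupB)

is(tt)          # what kind of object is it?
names(tt)       # the named parts of the list
tt$p.value      # access one part with $
tt["p.value"]   # ...or with brackets
```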
Chapter 13 Introduction to biological sequences databases
=========================================================
```
library(compbio4all)
```
**By**: Avril Coghlan.
**Adapted, edited and expanded**: Nathan Brouwer under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/).
13\.1 Topics
------------
* NCBI vs. EMBL vs. DDBJ
* annotation
* Nucleotide vs. Protein vs. EST vs. Genome
* PubMed
* GenBank Records
* FASTA file format
* FASTA header line
* RefSeq
* refining searches
13\.2 Introduction
------------------
**NCBI** is the National Center for Biotechnology Information. The [NCBI Website](https://www.ncbi.nlm.nih.gov/) www.ncbi.nlm.nih.gov/ is the entry point to a large number of databases giving access to **biological sequences** (DNA, RNA, protein) and biology-related publications.
When scientists sequence DNA, RNA, and proteins they typically publish their data via databases with the NCBI. Each sequence is given a unique identification number known as an **accession number**. For example, each time a unique human genome sequence is produced it is uploaded to the relevant databases, assigned a unique **accession**, and a website is created to access it. Sequences are also cross-referenced to related papers, so you can start with a sequence and find out what scientific paper it was used in, or start with a paper and see if any sequences are associated with it.
This chapter provides an introduction to the general search features of the NCBI databases via the interface on the website, including how to locate sequences using accession numbers and other search parameters, such as specific authors or papers. Subsequent chapters will introduce advanced search features, and how to carry out searches using R commands.
One consequence of the explosion of biological sequences used in publications is that the system of databases has become fairly complex. There are databases for different types of data, different types of molecules, data from individual experiments showing **genetic variation**, and also the **consensus sequence** for a given molecule. Luckily, if you know the accession number of the sequence you are looking for – which will be our starting point throughout this book – it’s fairly straightforward. There are numerous other books on bioinformatics and genomics that provide full details if you need to do more complex searches.
In this chapter we’ll typically refer generically to “NCBI data” and “NCBI databases.” This is a simplification, since NCBI is the name of the organization and the databases and search engines often have specific names.
13\.3 Biological sequence databases
-----------------------------------
Almost all published biological sequences are available online, as it is a requirement of every scientific journal that any published DNA, RNA, or protein sequence must be deposited in a public database. The main resources for storing and distributing sequence data are three large databases:
1. USA: **[NCBI database](https://www.ncbi.nlm.nih.gov/)** (www.ncbi.nlm.nih.gov/)
2. Europe: **European Molecular Biology Laboratory (EMBL)** database (<https://www.ebi.ac.uk/ena>)
3. Japan: **DNA Database of Japan (DDBJ)** database (www.ddbj.nig.ac.jp/).
These databases collect all publicly available DNA, RNA and protein sequence data and make it available for free. They exchange data nightly, so contain essentially the same data. The redundancy among the databases allows them to serve different communities (e.g. native languages), provide different additional services such as tutorials, and assure that the world’s scientists have their data backed up in different physical locations – a key component of good data management!
13\.4 The NCBI Sequence Database
--------------------------------
In this chapter we will explore the **NCBI sequence database** using the accession number NC\_001477, which is for the complete DEN\-1 Dengue virus genome sequence. The accession number is reported in scientific papers originally describing the sequence, and also in subsequent papers that use that particular sequence.
In addition to the sequence itself, for each sequence the NCBI database also stores some additional **annotation** data, such as the name of the species it comes from, references to publications describing that sequence, information on the structure of the proteins coded by the sequence, etc. Some of this annotation data was added by the person who sequenced a sequence and submitted it to the NCBI database, while some may have been added later by a human curator working for NCBI.
13\.5 The NCBI Sub\-Databases
-----------------------------
The NCBI database contains several sub\-databases, the most important of which are:
* **Nucleotide database**: contains DNA and RNA sequences
* **Protein database**: contains protein sequences
* **EST database**: contains ESTs (expressed sequence tags), which are short sequences derived from mRNAs. (This terminology is likely to be unfamiliar because it is not often used in introductory biology courses. The “Expressed” of EST comes from the fact that mRNA is the result of gene expression.)
* **Genome database**: contains the DNA sequences for entire genomes
* **PubMed**: contains data on scientific publications
From the main NCBI website you can initiate a search and it will look for hits across all the databases. You can narrow your search by selecting a particular database.
13\.6 NCBI GenBank Record Format
--------------------------------
As mentioned above, for each sequence the NCBI database stores some extra information such as the species that it came from, publications describing the sequence, etc. This information is stored in the GenBank entry (aka GenBank Record) for the sequence. The GenBank entry for a sequence can be viewed by searching the NCBI database for the accession number for that sequence.
To view the GenBank entry for the DEN\-1 Dengue virus, follow these steps:
1. Go to the [NCBI website](https://www.ncbi.nlm.nih.gov) (www.ncbi.nlm.nih.gov).
2. Search for the accession number NC\_001477\.
3. Since we searched for a particular accession we are only returned a single main result which is titled “NUCLEOTIDE SEQUENCE: Dengue virus 1, complete genome.”
4. Click on “Dengue virus 1, complete genome” to go to the GenBank entry.
The GenBank entry for an accession contains a LOT of information about the sequence, such as papers describing it, features in the sequence, etc. The **DEFINITION** field gives a short description for the sequence. The **ORGANISM** field in the NCBI entry identifies the species that the sequence came from. The **REFERENCE** field contains scientific publications describing the sequence. The **FEATURES** field contains information about the location of features of interest inside the sequence, such as regulatory sequences or genes that lie inside the sequence. The **ORIGIN** field gives the sequence itself.
13\.7 The FASTA file format
---------------------------
The **FASTA** file format is a simple file format commonly used to store and share sequence information. When you download sequences from databases such as NCBI you usually want FASTA files.
The first line of a FASTA file starts with the “greater than” character (\>) followed by a name and/or description for the sequence. Subsequent lines contain the sequence itself. A short FASTA file might contain just something like this:
```
## >mysequence1
## ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG
## TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG
```
A FASTA file can contain the sequence of a single gene, an entire genome, or more than one sequence. If a FASTA file contains many sequences, then each sequence has its own **header line** starting with a greater-than character, followed by the sequence itself.
This is what a FASTA file with two sequences looks like.
```
## >mysequence1
## ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG
## TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG
##
## >mysequence2
## AGGATTGAGGTATGGGTATGTTCCCGATTGAGTAGCCAGTATGAGCCAG
## AGTTTTTTACAAGTATTTTTCCCAGTAGCCAGAGAGAGAGTCACCCAGT
## ACAGAGAGC
```
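For working with FASTA files in R, one common option is the seqinr package (not used in this chapter, so treat this as an illustrative sketch): its `read.fasta()` function parses a file into a list with one element per sequence.
```
library(seqinr)

# write the tiny two-sequence example above to a temporary FASTA file
fasta_lines <- c(">mysequence1",
                 "ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG",
                 "TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG",
                 ">mysequence2",
                 "AGGATTGAGGTATGGGTATGTTCCCGATTGAGTAGCCAGTATGAGCCAG",
                 "AGTTTTTTACAAGTATTTTTCCCAGTAGCCAGAGAGAGAGTCACCCAGT",
                 "ACAGAGAGC")
fasta_file <- tempfile(fileext = ".fasta")
writeLines(fasta_lines, fasta_file)

# read it back in: a list with one element per sequence
seqs <- read.fasta(fasta_file)
length(seqs)   # 2
names(seqs)    # "mysequence1" "mysequence2"
```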
13\.8 RefSeq
------------
When carrying out searches of the NCBI database, it is important to bear in mind that the database may contain **redundant sequences** for the same gene that were sequenced by different laboratories and experiments. This is because many different labs have sequenced the gene and submitted their sequences to the NCBI database, and because variation exists between individual organisms, both long-standing population-level variation from previous mutations and potential recent spontaneous mutations. There can also be some error in the sequencing process that results in differences between sequences.
There are also many different types of nucleotide sequences and protein sequences in the NCBI database. With respect to nucleotide sequences, some may be entire genomic DNA sequences, some may be mRNAs, and some may be lower-quality sequences such as expressed sequence tags (ESTs, which are derived from parts of mRNAs) or DNA sequences of **contigs** from genome projects. That is, you can end up with an entry in the protein database based on sequence derived from a genomic sequence, from sequencing just the gene, or from other routes. Furthermore, some sequences may be **manually curated** by NCBI staff so that the associated entries contain extra information, but the majority of sequences are **uncurated.**
Therefore, NCBI databases often contains redundant information for a gene, contains sequences of varying quality, and contains both uncurated and curated data. As a result, NCBI has made a special database called **RefSeq (reference sequence database)**, which is a subset of the NCBI database. The data in RefSeq is manually curated, is high quality sequence data, and is non\-redundant; this means that each gene (or **splice\-form / isoform** of a gene, in the case of eukaryotes), protein, or genome sequence is only represented once.
The data in RefSeq is curated and is of much higher quality than the rest of the NCBI Sequence Database. Unfortunately, because of the high level of manual curation required, RefSeq does not cover all species, and is not comprehensive for the species that are covered so far. To speed up searches and simplify the results, it can be very useful to search just RefSeq. However, for detailed and thorough work the full database should probably be searched and the results scrutinized.
You can easily tell that a sequence comes from RefSeq because its accession number starts with a particular combination of letters. That is, accessions of RefSeq sequences corresponding to protein records usually start with **NP\_**, and accessions of RefSeq curated complete genome sequences usually start with **NC\_** or **NS\_**.
13\.9 Querying the NCBI Database
--------------------------------
You may need to interrogate the NCBI Database to find particular sequences or a set of sequences matching given criteria, such as:
* The sequence with accession NC\_001477
* The sequences published in Nature 460:352\-358
* All sequences from *Chlamydia trachomatis*
* Sequences submitted by Caroline Cameron, a syphilis researcher
* Flagellin or fibrinogen sequences
* The glutamine synthetase gene from *Mycobacteriuma leprae*
* Just the upstream control region of the *Mycobacterium leprae* dnaA gene
* The sequence of the *Mycobacterium leprae* DnaA protein
* The genome sequence of syphilis, *Treponema pallidum* subspp. *pallidum*
* All human nucleotide sequences associated with malaria
There are two main ways that you can query the NCBI database to find these sets of sequences. The first possibility is to carry out searches on the NCBI website. The second possibility is to carry out searches from R using one of several packages that can interface with NCBI. As of October 2019, rentrez seems to be the best package for this.
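A minimal sketch of the second approach, using the rentrez package (this assumes rentrez is installed; the accession is the DEN-1 Dengue virus genome used throughout this chapter):
```
library(rentrez)

# search the nucleotide database for the DEN-1 Dengue virus accession
res <- entrez_search(db = "nuccore", term = "NC_001477")
res$ids   # internal NCBI ids of the matching records

# download the matching record in FASTA format
fasta_txt <- entrez_fetch(db = "nuccore", id = res$ids, rettype = "fasta")
cat(substr(fasta_txt, 1, 200))   # peek at the start of the record
```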
Below, I will explain how to manually carry out queries on the NCBI database.
13\.10 Querying the NCBI Database via the NCBI Website (for reference)
----------------------------------------------------------------------
**NOTE**: The following section is here for reference; you need to know its *possible* to refine searches but do not need to know any of these actual tags.
If you are carrying out searches on the NCBI website, to narrow down your searches to specific types of sequences or to specific organisms, you will need to use **“search tags”.**
For example, the search tags “\[PROP]” and “\[ORGN]” let you restrict your search to a specific subset of the NCBI Sequence Database, or to sequences from a particular taxon, respectively. Here is a list of useful search tags, which we will explain how to use below:
* \[AC], e.g. NC\_001477\[AC] With a particular accession number
* \[ORGN], e.g. Fungi\[ORGN] From a particular organism or taxon
* \[PROP], e.g. biomol\_mRNA\[PROP] Of a specific type (eg. mRNA) or from a specific database (eg. RefSeq)
* \[JOUR], e.g. Nature\[JOUR] Described in a paper published in a particular journal
* \[VOL], e.g. 531\[VOL] Described in a paper published in a particular journal volume
* \[PAGE], e.g. 27\[PAGE] Described in a paper with a particular start\-page in a journal
* \[AU], e.g. “Smith J”\[AU] Described in a paper, or submitted to NCBI, by a particular author
To carry out searches of the NCBI database, you first need to go to the NCBI website, and type your search query into the search box at the top. For example, to search for all sequences from Fungi, you would type “Fungi\[ORGN]” into the search box on the NCBI website.
You can combine the search tags above by using “AND”, to make more complex searches. For example, to find all mRNA sequences from Fungi, you could type “Fungi\[ORGN] AND biomol\_mRNA\[PROP]” in the search box on the NCBI website.
Likewise, you can also combine search tags by using “OR”, for example, to search for all mRNA sequences from Fungi or Bacteria, you would type “(Fungi\[ORGN] OR Bacteria\[ORGN]) AND biomol\_mRNA\[PROP]” in the search box. Note that you need to put brackets around “Fungi\[ORGN] OR Bacteria\[ORGN]” to specify that the word “OR” refers to these two search tags.
Here are some examples of searches, some of them made by combining search terms using “AND”:
* NC\_001477\[AC] \- With accession number NC\_001477
* Nature\[JOUR] AND 460\[VOL] AND 352\[PAGE] \- Published in Nature 460:352\-358
* “Chlamydia trachomatis”\[ORGN] \- From the bacterium Chlamydia trachomatis
* “Berriman M”\[AU] \- Published in a paper, or submitted to NCBI, by M. Berriman
* flagellin OR fibrinogen \- Which contain the word “flagellin” or “fibrinogen” in their NCBI record
* “Mycobacterium leprae”\[ORGN] AND dnaA \- Which are from M. leprae, and contain “dnaA” in their NCBI record
* “Homo sapiens”\[ORGN] AND “colon cancer” \- Which are from human, and contain “colon cancer” in their NCBI record
* “Homo sapiens”\[ORGN] AND malaria \- Which are from human, and contain “malaria” in their NCBI record
* “Homo sapiens”\[ORGN] AND biomol\_mrna\[PROP] \- Which are mRNA sequences from human
* “Bacteria”\[ORGN] AND srcdb\_refseq\[PROP] \- Which are RefSeq sequences from Bacteria
* “colon cancer” AND srcdb\_refseq\[PROP] \- From RefSeq, which contain “colon cancer” in their NCBI record
Note that if you are searching for a phrase such as “colon cancer” or “Chlamydia trachomatis”, you need to put the phrase in quotes when typing it into the search box. This is because if you type the phrase in the search box without quotes, the search will be for NCBI records that contain either of the two words “colon” or “cancer” (or either of the two words “Chlamydia” or “trachomatis”), not necessarily both words.
As mentioned above, the NCBI database contains several sub\-databases, including the NCBI **Nucleotide database** and the NCBI **Protein database**. If you go to the NCBI website, and type one of the search queries above in the search box at the top of the page, the results page will tell you how many matching NCBI records were found in each of the NCBI sub\-databases.
For example, if you search for “Chlamydia trachomatis\[ORGN]”, you will get matches to proteins from C. trachomatis in the NCBI Protein database, matches to DNA and RNA sequences from *C. trachomatis* in the NCBI Nucleotide database, matches to whole genome sequences for C. trachomatis strains in the NCBI Genome database, and so on:
Alternatively, if you know in advance that you want to search a particular sub\-database, for example, the NCBI Protein database, when you go to the NCBI website, you can select that sub\-database from the drop\-down list above the search box, so that you will search that sub\-database.
13\.11 Example: finding the sequences published in Nature 460:352\-358 (for reference)
--------------------------------------------------------------------------------------
**NOTE**: The following section is here for reference; you need to know its *possible* to refine searches but do not need to know any of these actual tags.
For example, if you want to find sequences published in Nature 460:352\-358, you can use the “\[JOUR]”, “\[VOL]” and “\[PAGE]” search terms. That is, you would go to the NCBI website and type in the search box on the top: “Nature”\[JOUR] AND 460\[VOL] AND 352\[PAGE], where \[JOUR] specifies the journal name, \[VOL] the volume of the journal the paper is in, and \[PAGE] the page number.
This should bring up a results page with “50890” beside the word “Nucleotide”, and “1” beside the word “Genome”, and “25701” beside the word “Protein”, indicating that there were 50890 hits to sequence records in the Nucleotide database, which contains DNA and RNA sequences, and 1 hit to the Genome database, which contains genome sequences, and 25701 hits to the Protein database, which contains protein sequences.
If you click on the word “Nucleotide”, it will bring up a webpage with a list of links to the NCBI sequence records for those 50890 hits. The 50890 hits are all contigs from the schistosome worm *Schistosoma mansoni*.
Likewise, if you click on the word “Protein”, it will bring up a webpage with a list of links to the NCBI sequence records for the 25701 hits, and you will see that the hits are all predicted proteins for *Schistosoma mansoni*.
If you click on the word “Genome”, it will bring you to the NCBI record for the *Schistosoma mansoni* genome sequence, which has NCBI accession NS\_00200\. Note that the accession starts with “NS\_”, which indicates that it is a RefSeq accession.
Therefore, in Nature volume 460, page 352, the *Schistosoma mansoni* genome sequence was published, along with all the DNA sequence contigs that were sequenced for the genome project, and all the predicted proteins for the gene predictions made in the genome sequence. You can view the original paper on the Nature website at [http://www.nature.com/nature/journal/v460/n7253/abs/nature08160\.html](http://www.nature.com/nature/journal/v460/n7253/abs/nature08160.html).
Note: *Schistosoma mansoni* is a parasitic worm that is responsible for causing **schistosomiasis**, which is classified by the WHO as a **neglected tropical disease**.
13\.1 Topics
------------
* NCBI vs. EMBL vs. DDBJ
* annotation
* Nucleotide vs. Protein vs. EST vs. Genome
* PubMed
* GenBank Records
* FASTA file format
* FASTA header line
* RefSeq
* refining searches
13\.2 Introduction
------------------
**NCBI** is the National Center for Biotechnology Information. The [NCBI Webiste](https://www.ncbi.nlm.nih.gov/) www.ncbi.nlm.nih.gov/ is the entry point to a large number of databases giving access to **biological sequences** (DNA, RNA, protein) and biology\-related publications.
When scientists sequence DNA, RNA and proteins they typically publish their data via databases with the NCBI. Each is given a unique identification number known as an **accession number**. For example, each time a unique human genome sequence is produced it is uploaded to the relevant databases, assigned a unique **accession**, and a website created to access it. Sequence are also cross\-referenced to related papers, so you can start with a sequence and find out what scientific paper it was used in, or start with a paper and see if any sequences are associated with it.
This chapter provides an introduction to the general search features of the NCBI databases via the interface on the website, including how to locate sequences using accession numbers and other search parameters, such as specific authors or papers. Subsequent chapters will introduce advanced search features, and how to carry out searches using R commands.
One consequence of the explosion of biological sequences used in publications is that the system of databases has become fairly complex. There are databases for different types of data, different types of molecules, data from individual experiments showing **genetic variation** and also the **consensus sequence** for a given molecule. Luckily, if you know the accession number of the sequence you are looking for – which will be our starting point throughout this book – its fairly straight forward. There are numerous other books on bioinformatics and genomics that provide all details if you need to do more complex searches.
In this chapter we’ll typically refer generically to “NCBI data” and “NCBI databases.” This is a simplification, since NCBI is the name of the organization and the databases and search engines often have specific names.
13\.3 Biological sequence databases
-----------------------------------
Almost published biological sequences are available online, as it is a requirement of every scientific journal that any published DNA or RNA or protein sequence must be deposited in a public database. The main resources for storing and distributing sequence data are three large databases:
1. USA: **[NCBI database](https://www.ncbi.nlm.nih.gov/)** (www.ncbi.nlm.nih.gov/)
2. Europe: **European Molecular Biology Laboratory (EMBL)** database (<https://www.ebi.ac.uk/ena>)
3. Japan: **DNA Database of Japan (DDBJ)** database (www.ddbj.nig.ac.jp/).
These databases collect all publicly available DNA, RNA and protein sequence data and make it available for free. They exchange data nightly, so contain essentially the same data. The redundancy among the databases allows them to serve different communities (e.g. native languages), provide different additional services such as tutorials, and assure that the world’s scientists have their data backed up in different physical locations – a key component of good data management!
13\.4 The NCBI Sequence Database
--------------------------------
In this chapter we will explore the **NCBI sequence database** using the accession number NC\_001477, which is for the complete DEN\-1 Dengue virus genome sequence. The accession number is reported in scientific papers originally describing the sequence, and also in subsequent papers that use that particular sequence.
In addition to the sequence itself, for each sequence the NCBI database also stores some additional **annotation** data, such as the name of the species it comes from, references to publications describing that sequence, information on the structure of the proteins coded by the sequence, etc. Some of this annotation data was added by the person who determined the sequence and submitted it to the NCBI database, while some may have been added later by a human curator working for NCBI.
13\.5 The NCBI Sub\-Databases
-----------------------------
The NCBI database contains several sub\-databases, the most important of which are:
* **Nucleotide database**: contains DNA and RNA sequences
* **Protein database**: contains protein sequences
* **EST database**: contains ESTs (expressed sequence tags), which are short sequences derived from mRNAs. (This terminology is likely to be unfamiliar because it is not often used in introductory biology courses. The “Expressed” of EST comes from the fact that mRNA is the result of gene expression.)
* **Genome database**: contains the DNA sequences for entire genomes
* **PubMed**: contains data on scientific publications
From the main NCBI website you can initiate a search and it will look for hits across all the databases. You can narrow your search by selecting a particular database.
13\.6 NCBI GenBank Record Format
--------------------------------
As mentioned above, for each sequence the NCBI database stores some extra information such as the species that it came from, publications describing the sequence, etc. This information is stored in the GenBank entry (aka GenBank Record) for the sequence. The GenBank entry for a sequence can be viewed by searching the NCBI database for the accession number for that sequence.
To view the GenBank entry for the DEN\-1 Dengue virus, follow these steps:
1. Go to the [NCBI website](https://www.ncbi.nlm.nih.gov) (www.ncbi.nlm.nih.gov).
2. Search for the accession number NC\_001477\.
3. Since we searched for a particular accession we are only returned a single main result which is titled “NUCLEOTIDE SEQUENCE: Dengue virus 1, complete genome.”
4. Click on “Dengue virus 1, complete genome” to go to the GenBank entry.
The GenBank entry for an accession contains a LOT of information about the sequence, such as papers describing it, features in the sequence, etc. The **DEFINITION** field gives a short description for the sequence. The **ORGANISM** field in the NCBI entry identifies the species that the sequence came from. The **REFERENCE** field contains scientific publications describing the sequence. The **FEATURES** field contains information about the location of features of interest inside the sequence, such as regulatory sequences or genes that lie inside the sequence. The **ORIGIN** field gives the sequence itself.
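If you would rather pull the same record into R than view it in a browser, the rentrez package (used for searches later in this book) can fetch it. The following is a minimal sketch, assuming rentrez is installed and you have an internet connection; it is only a preview of what later chapters cover in detail.

```
# Sketch: fetch the GenBank record for NC_001477 into R
# (assumes the rentrez package is installed and an internet connection)
library(rentrez)

# rettype = "gb" requests GenBank format rather than FASTA;
# the record is returned as one long character string
den1_gb <- entrez_fetch(db = "nucleotide",
                        id = "NC_001477",
                        rettype = "gb")

# cat() prints the string with its line breaks, so the DEFINITION, ORGANISM,
# REFERENCE, FEATURES and ORIGIN fields are readable; substr() keeps it short
cat(substr(den1_gb, 1, 1000))
```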
13\.7 The FASTA file format
---------------------------
The **FASTA** file format is a simple file format commonly used to store and share sequence information. When you download sequences from databases such as NCBI you usually want FASTA files.
The first line of a FASTA file starts with the “greater than” character (\>) followed by a name and/or description for the sequence. Subsequent lines contain the sequence itself. A short FASTA file might contain just something like this:
```
## >mysequence1
## ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG
## TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG
```
A FASTA file can contain the sequence of a single gene, an entire genome, or more than one sequence. If a FASTA file contains many sequences, then each sequence will have its own **header line** starting with a greater than character, followed by the sequence itself.
This is what a FASTA file with two sequences looks like.
```
## >mysequence1
## ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG
## TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG
##
## >mysequence2
## AGGATTGAGGTATGGGTATGTTCCCGATTGAGTAGCCAGTATGAGCCAG
## AGTTTTTTACAAGTATTTTTCCCAGTAGCCAGAGAGAGAGTCACCCAGT
## ACAGAGAGC
```
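Later chapters cover reading FASTA files into R in detail. As a quick preview, here is a minimal sketch that writes the two toy sequences above to a temporary file and reads them back with `read.fasta()` from the seqinr package; the file path and object names are just placeholders.

```
# Sketch: write a small two-sequence FASTA file and read it back into R
# (assumes the seqinr package is installed; names are placeholders)
library(seqinr)

fasta_text <- c(">mysequence1",
                "ACATGAGACAGACAGACCCCCAGAGACAGACCCCTAGACACAGAGAGAG",
                "TATGCAGGACAGGGTTTTTGCCCAGGGTGGCAGTATG",
                ">mysequence2",
                "AGGATTGAGGTATGGGTATGTTCCCGATTGAGTAGCCAGTATGAGCCAG",
                "AGTTTTTTACAAGTATTTTTCCCAGTAGCCAGAGAGAGAGTCACCCAGT",
                "ACAGAGAGC")

my_file <- tempfile(fileext = ".fasta")  # temporary file path
writeLines(fasta_text, my_file)          # save the FASTA text to disk

seqs <- read.fasta(file = my_file)       # a list, one element per sequence
length(seqs)                             # 2
names(seqs)                              # "mysequence1" "mysequence2"
```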
13\.8 RefSeq
------------
When carrying out searches of the NCBI database, it is important to bear in mind that the database may contain **redundant sequences** for the same gene that were sequenced by different laboratories using different experimental approaches. This is because many different labs have sequenced the gene and submitted their sequences to the NCBI database, because variation exists between individual organisms due to population\-level variation from previous mutations and potential recent spontaneous mutations, and because errors in the sequencing process can result in differences between sequences.
There are also many different types of nucleotide sequences and protein sequences in the NCBI database. With respect to nucleotide sequences, some may be entire genomic DNA sequences, some may be mRNAs, and some may be lower quality sequences such as expressed sequence tags (ESTs, which are derived from parts of mRNAs), or DNA sequences of **contigs** from genome projects. That is, you can end up with an entry in the protein database based on sequence derived from a genomic sequence, from sequencing just the gene, or from other routes. Furthermore, some sequences may be **manually curated** by NCBI staff so that the associated entries contain extra information, but the majority of sequences are **uncurated.**
Therefore, the NCBI databases often contain redundant information for a gene, sequences of varying quality, and both uncurated and curated data. As a result, NCBI has made a special database called **RefSeq (reference sequence database)**, which is a subset of the NCBI database. The data in RefSeq is manually curated, is high quality sequence data, and is non\-redundant; this means that each gene (or **splice\-form / isoform** of a gene, in the case of eukaryotes), protein, or genome sequence is only represented once.
The data in RefSeq is curated and is of much higher quality than the rest of the NCBI Sequence Database. However, because of the high level of manual curation required, RefSeq does not cover all species, and is not comprehensive for the species that are covered so far. To speed up searches and simplify the results, it can be very useful to search just RefSeq. However, for detailed and thorough work the full database should probably be searched and the results scrutinized.
You can easily tell that a sequence comes from RefSeq because its accession number starts with a particular set of letters. That is, accessions of RefSeq sequences corresponding to protein records usually start with **NP\_**, and accessions of RefSeq curated complete genome sequences usually start with **NC\_** or **NS\_**.
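Because the prefix encodes this information, a simple pattern match in R is enough to flag accessions that look like they come from RefSeq. Here is a minimal sketch in base R; the non\-RefSeq accessions below are ones used later in this book.

```
# Sketch: flag accessions whose prefix suggests a RefSeq record
accessions <- c("NC_001477", "NP_065910", "AAF13269", "CAA58534")

# ^NP_ = RefSeq protein; ^NC_ or ^NS_ = RefSeq curated complete genome
is_refseq <- grepl("^(NP_|NC_|NS_)", accessions)

data.frame(accession = accessions, refseq = is_refseq)
```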
13\.9 Querying the NCBI Database
--------------------------------
You may need to interrogate the NCBI Database to find particular sequences or a set of sequences matching given criteria, such as:
* The sequence with accession NC\_001477
* The sequences published in Nature 460:352\-358
* All sequences from *Chlamydia trachomatis*
* Sequences submitted by Caroline Cameron, a syphilis researcher
* Flagellin or fibrinogen sequences
* The glutamine synthetase gene from *Mycobacterium leprae*
* Just the upstream control region of the *Mycobacterium leprae* dnaA gene
* The sequence of the *Mycobacterium leprae* DnaA protein
* The genome sequence of syphilis, *Treponema pallidum* subspp. *pallidum*
* All human nucleotide sequences associated with malaria
There are two main ways that you can query the NCBI database to find these sets of sequences. The first possibility is to carry out searches on the NCBI website. The second possibility is to carry out searches from R using one of several packages that can interface with NCBI. As of October 2019, rentrez seems to be the best package for this.
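As a quick preview of the R route, here is a minimal sketch using `entrez_search()` from rentrez (assuming the package is installed); the same search tags described in the next section work inside the query string.

```
# Sketch: query NCBI from R with rentrez (assumes rentrez is installed)
library(rentrez)

# search the Nucleotide database for a specific accession
res <- entrez_search(db = "nucleotide", term = "NC_001477[AC]")

res$count   # number of matching records
res$ids     # NCBI's internal ids for the hits
```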
Below, I will explain how to manually carry out queries on the NCBI database.
13\.10 Querying the NCBI Database via the NCBI Website (for reference)
----------------------------------------------------------------------
**NOTE**: The following section is here for reference; you need to know it’s *possible* to refine searches but do not need to know any of these actual tags.
If you are carrying out searches on the NCBI website, to narrow down your searches to specific types of sequences or to specific organisms, you will need to use **“search tags”.**
For example, the search tags “\[PROP]” and “\[ORGN]” let you restrict your search to a specific subset of the NCBI Sequence Database, or to sequences from a particular taxon, respectively. Here is a list of useful search tags, which we will explain how to use below:
* \[AC], e.g. NC\_001477\[AC] With a particular accession number
* \[ORGN], e.g. Fungi\[ORGN] From a particular organism or taxon
* \[PROP], e.g. biomol\_mRNA\[PROP] Of a specific type (eg. mRNA) or from a specific database (eg. RefSeq)
* \[JOUR], e.g. Nature\[JOUR] Described in a paper published in a particular journal
* \[VOL], e.g. 531\[VOL] Described in a paper published in a particular journal volume
* \[PAGE], e.g. 27\[PAGE] Described in a paper with a particular start\-page in a journal
* \[AU], e.g. “Smith J”\[AU] Described in a paper, or submitted to NCBI, by a particular author
To carry out searches of the NCBI database, you first need to go to the NCBI website, and type your search query into the search box at the top. For example, to search for all sequences from Fungi, you would type “Fungi\[ORGN]” into the search box on the NCBI website.
You can combine the search tags above by using “AND”, to make more complex searches. For example, to find all mRNA sequences from Fungi, you could type “Fungi\[ORGN] AND biomol\_mRNA\[PROP]” in the search box on the NCBI website.
Likewise, you can also combine search tags by using “OR”, for example, to search for all mRNA sequences from Fungi or Bacteria, you would type “(Fungi\[ORGN] OR Bacteria\[ORGN]) AND biomol\_mRNA\[PROP]” in the search box. Note that you need to put brackets around “Fungi\[ORGN] OR Bacteria\[ORGN]” to specify that the word “OR” refers to these two search tags.
Here are some examples of searches, some of them made by combining search terms using “AND”:
* NC\_001477\[AC] \- With accession number NC\_001477
* Nature\[JOUR] AND 460\[VOL] AND 352\[PAGE] \- Published in Nature 460:352\-358
* “Chlamydia trachomatis”\[ORGN] \- From the bacterium Chlamydia trachomatis
* “Berriman M”\[AU] \- Published in a paper, or submitted to NCBI, by M. Berriman
* flagellin OR fibrinogen \- Which contain the word “flagellin” or “fibrinogen” in their NCBI record
* “Mycobacterium leprae”\[ORGN] AND dnaA \- Which are from M. leprae, and contain “dnaA” in their NCBI record
* “Homo sapiens”\[ORGN] AND “colon cancer” \- Which are from human, and contain “colon cancer” in their NCBI record
* “Homo sapiens”\[ORGN] AND malaria \- Which are from human, and contain “malaria” in their NCBI record
* “Homo sapiens”\[ORGN] AND biomol\_mrna\[PROP] \- Which are mRNA sequences from human
* “Bacteria”\[ORGN] AND srcdb\_refseq\[PROP] \- Which are RefSeq sequences from Bacteria
* “colon cancer” AND srcdb\_refseq\[PROP] \- From RefSeq, which contain “colon cancer” in their NCBI record
Note that if you are searching for a phrase such as “colon cancer” or “Chlamydia trachomatis”, you need to put the phrase in quotes when typing it into the search box. This is because if you type the phrase in the search box without quotes, the search will be for NCBI records that contain either of the two words “colon” or “cancer” (or either of the two words “Chlamydia” or “trachomatis”), not necessarily both words.
As mentioned above, the NCBI database contains several sub\-databases, including the NCBI **Nucleotide database** and the NCBI **Protein database**. If you go to the NCBI website, and type one of the search queries above in the search box at the top of the page, the results page will tell you how many matching NCBI records were found in each of the NCBI sub\-databases.
For example, if you search for “Chlamydia trachomatis\[ORGN]”, you will get matches to proteins from *C. trachomatis* in the NCBI Protein database, matches to DNA and RNA sequences from *C. trachomatis* in the NCBI Nucleotide database, matches to whole genome sequences for *C. trachomatis* strains in the NCBI Genome database, and so on.
Alternatively, if you know in advance that you want to search a particular sub\-database, for example, the NCBI Protein database, when you go to the NCBI website, you can select that sub\-database from the drop\-down list above the search box, so that you will search that sub\-database.
13\.11 Example: finding the sequences published in Nature 460:352\-358 (for reference)
--------------------------------------------------------------------------------------
**NOTE**: The following section is here for reference; you need to know it’s *possible* to refine searches but do not need to know any of these actual tags.
For example, if you want to find sequences published in Nature 460:352\-358, you can use the “\[JOUR]”, “\[VOL]” and “\[PAGE]” search terms. That is, you would go to the NCBI website and type in the search box on the top: “Nature”\[JOUR] AND 460\[VOL] AND 352\[PAGE], where \[JOUR] specifies the journal name, \[VOL] the volume of the journal the paper is in, and \[PAGE] the page number.
This should bring up a results page with “50890” beside the word “Nucleotide”, and “1” beside the word “Genome”, and “25701” beside the word “Protein”, indicating that there were 50890 hits to sequence records in the Nucleotide database, which contains DNA and RNA sequences, and 1 hit to the Genome database, which contains genome sequences, and 25701 hits to the Protein database, which contains protein sequences.
If you click on the word “Nucleotide”, it will bring up a webpage with a list of links to the NCBI sequence records for those 50890 hits. The 50890 hits are all contigs from the schistosome worm *Schistosoma mansoni*.
Likewise, if you click on the word “Protein”, it will bring up a webpage with a list of links to the NCBI sequence records for the 25701 hits, and you will see that the hits are all predicted proteins for *Schistosoma mansoni*.
If you click on the word “Genome”, it will bring you to the NCBI record for the *Schistosoma mansoni* genome sequence, which has NCBI accession NS\_00200\. Note that the accession starts with “NS\_”, which indicates that it is a RefSeq accession.
Therefore, in Nature volume 460, page 352, the *Schistosoma mansoni* genome sequence was published, along with all the DNA sequence contigs that were sequenced for the genome project, and all the predicted proteins for the gene predictions made in the genome sequence. You can view the original paper on the Nature website at [http://www.nature.com/nature/journal/v460/n7253/abs/nature08160\.html](http://www.nature.com/nature/journal/v460/n7253/abs/nature08160.html).
Note: *Schistosoma mansoni* is a parasitic worm that is responsible for causing **schistosomiasis**, which is classified by the WHO as a **neglected tropical disease**.
Chapter 15 Downloading sequences from UniProt by hand
=====================================================
**By**: Avril Coghlan.
**Adapted, edited and expanded**: Nathan Brouwer under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/).
15\.1 Vocab
-----------
* RefSeq
* manual curation
* UniProt
* accession
15\.2 Downloading Protein data from UniProt
-------------------------------------------
In a previous vignette you learned how to retrieve sequences from the NCBI database. The NCBI database is a key database in bioinformatics because it contains essentially all DNA sequences ever sequenced.
As mentioned previously, a subsection of the NCBI database called **RefSeq** consists of high quality DNA and protein sequence data. Furthermore, the NCBI entries for the RefSeq sequences have been **manually curated**, which means that expert biologists employed by NCBI have added additional information to the NCBI entries for those sequences, such as details of scientific papers that describe the sequences.
Another extremely important manually curated database is [UniProt](https://www.uniprot.org/) www.uniprot.org, which focuses on protein sequences. UniProt aims to contain manually curated information on all known protein sequences. While many of the protein sequences in UniProt are also present in RefSeq, the amount and quality of manually curated information in UniProt is much higher than that in RefSeq.
For each protein in UniProt, the UniProt curators read all the scientific papers that they can find about that protein, and add information from those papers to the protein’s UniProt entry. For example, for a human protein, the UniProt entry for the protein usually includes information about the biological function of the protein, in what human tissues it is expressed, whether it interacts with other human proteins, and much more. All this information has been manually gathered by the UniProt curators from scientific papers, and the papers in which they found the information are always listed in the UniProt entry for the protein.
Just like NCBI, UniProt also assigns an **accession** to each sequence in the UniProt database. Although the same protein sequence may appear in both the NCBI database and the UniProt database, it will have *different* NCBI and UniProt accessions. However, there is usually a link on the NCBI entry for the protein sequence to the UniProt entry, and vice versa.
15\.3 Viewing the UniProt webpage for a protein sequence
--------------------------------------------------------
If you are given the UniProt accession for a protein, to find the UniProt entry for the protein, you first need to go to the UniProt website, www.uniprot.org. At the top of the UniProt website, you will see a search box, and you can type the accession of the protein that you are looking for in this search box, and then click on the “Search” button to search for it.
For example, if you want to find the sequence for the chorismate lyase protein from *Mycobacterium leprae* (the bacterium which causes leprosy), which has UniProt accession Q9CD83, you would type just “Q9CD83” in the search box and press “Search”. The UniProt entry for UniProt accession Q9CD83 will then appear in your web browser.
Beside the heading “Organism” you can see the organism is given as *Mycobacterium leprae*. If you scroll down you’ll find a section **Names and Taxonomy** and beside the heading “Taxonomic lineage”, you can see “Bacteria \- Actinobacteria \- Actinobacteridae \- Actinomycetales \- Corynebacterineae \- Mycobacteriaceae \- Mycobacterium”.
This tells us that *Mycobacterium* is a species of bacteria, which belongs to a group of related bacteria called the Mycobacteriaceae, which itself belongs to a larger group of related bacteria called the Corynebacterineae, which itself belongs to an even larger group of related bacteria called the Actinomycetales, which itself belongs to the Actinobacteridae, which itself belongs to a huge group of bacteria called the Actinobacteria.
### 15\.3\.1 Protein function
Back up at the top under “Organism” it says “Status”, which tells us the **annotation score** is 2 out of 5 and that it is a “Protein inferred from homology”, which means what we know about it is derived from bioinformatics and computational tools, not lab work.
Beside the heading “Function”, it says that the function of this protein is that it “Removes the pyruvyl group from chorismate to provide 4\-hydroxybenzoate (4HB)”. This tells us this protein is an enzyme (a protein that increases the rate of a specific biochemical reaction), and tells us the particular biochemical reaction that this enzyme is involved in. At the end of this info it says “By similarity”, which again indicates that what we know about this protein comes from bioinformatics, not lab work.
### 15\.3\.2 Protein sequence and size
Under **Sequence** we see that the sequence is 210 amino acids (210 letters) long and has a mass of 24,045 daltons. We can access the sequence as a FASTA file from here if we want and also carry out a BLAST search from a link on the right.
### 15\.3\.3 Other information
Further down the UniProt page for this protein, you will see a lot more information, as well as many links to webpages in other biological databases, such as NCBI. The huge amount of information about proteins in UniProt means that if you want to find out about a particular protein, the UniProt page for that protein is a great place to start.
15\.4 Retrieving a UniProt protein sequence via the UniProt website
-------------------------------------------------------------------
There are a couple of different ways to retrieve the sequence. At the top of the page is a tab that says “Format” which brings you to a page with the FASTA file. You can copy and paste the sequence from here if you want. To save it as a file, go to the “File” menu of your web browser, choose “Save page as”, and save the file. Remember to give the file a sensible name (eg. “Q9CD83\.fasta” for accession Q9CD83\), and save it in a place that you will remember (eg. in the “My Documents” folder).
For example, you can retrieve the protein sequences for the chorismate lyase protein from *Mycobacterium leprae* (which has UniProt accession Q9CD83\) and for the chorismate lyase protein from *Mycobacterium ulcerans* (UniProt accession A0PQ23\), and save them as FASTA\-format files (eg. “Q9CD83\.fasta” and “A0PQ23\.fasta”), as described above.
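You can also download the FASTA file from within R instead of through your browser. The sketch below assumes that UniProt serves FASTA files at URLs of the form `https://rest.uniprot.org/uniprotkb/<accession>.fasta`; this URL scheme is an assumption that may change, so check the UniProt website if the download fails.

```
# Sketch: download a UniProt FASTA file directly from R
# (the URL pattern is an assumption about UniProt's REST interface)
acc <- "Q9CD83"
url <- paste0("https://rest.uniprot.org/uniprotkb/", acc, ".fasta")

# saves Q9CD83.fasta into R's current working directory
download.file(url, destfile = paste0(acc, ".fasta"))
```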
You can also put the UniProt information into an online **Basket**. If you do this for both Q9CD83 and A0PQ23 you can then click on **Basket**, select both entries, and carry out a pairwise alignment by clicking on **Align**.
*Mycobacterium leprae* is the bacterium which causes leprosy, while *Mycobacterium ulcerans* is a related bacterium which causes Buruli ulcer, both of which are classified by the WHO as neglected tropical diseases. The *M. leprae* and *M. ulcerans* chorismate lyase proteins are an example of a pair of **homologous** (related) proteins in two related species of bacteria.
If you downloaded the protein sequences for UniProt accessions Q9CD83 and A0PQ23 and saved them as FASTA\-format files (eg. “Q9CD83\.fasta” and “A0PQ23\.fasta”), you could read them into R using the read.fasta() function in the SeqinR R package (as detailed in another Vignette) or a similar function from another package.
Note that the read.fasta() function normally expects that you have put your FASTA\-format files in the **working directory** of R. For convenience, so you can explore these sequences, they have been saved in a special folder in the *compbio4all* package and can be accessed like this for the leprosy sequence:
```
# load compbio4all
library(compbio4all)
# locate the file within the package using system.file()
file.1 <- system.file("./extdata/Q9CD83.fasta",package = "compbio4all")
# load seqinr
library("seqinr")
# load fasta
leprae <- read.fasta(file = file.1)
lepraeseq <- leprae[[1]]
```
We can confirm the sequence loaded properly using `str()`:
```
str(lepraeseq)
```
```
# 'SeqFastadna' chr [1:210] "m" "t" "n" "r" "t" "l" "s" "r" "e" "e" "i" ...
# - attr(*, "name")= chr "sp|Q9CD83|PHBS_MYCLE"
# - attr(*, "Annot")= chr ">sp|Q9CD83|PHBS_MYCLE Chorismate pyruvate-lyase
# OS=Mycobacterium leprae (strain TN) OX=272631 GN=ML0133 PE=3 SV=1"
```
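From here, a few quick checks with base R and seqinr confirm what we saw on the UniProt page; a minimal sketch using the object created above.

```
# Sketch: basic checks on the sequence we just loaded
length(lepraeseq)     # 210 amino acids, matching the UniProt entry

# seqinr stores the sequence as a vector of single characters;
# c2s() collapses it into a single string for easier viewing
seqinr::c2s(lepraeseq)

# count how many times each amino acid occurs
table(lepraeseq)
```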
For the other sequence
```
# locate the file within the package using system.file()
file.2 <- system.file("./extdata/A0PQ23.fasta", package = "compbio4all")
# load fasta
ulcerans <- read.fasta(file = file.2)
ulceransseq <- ulcerans[[1]]
str(ulceransseq)[[1]]
```
```
# 'SeqFastadna' chr [1:212] "m" "l" "a" "v" "l" "p" "e" "k" "r" "e" "m" ...
# - attr(*, "name")= chr "tr|A0PQ23|A0PQ23_MYCUA"
# - attr(*, "Annot")= chr ">tr|A0PQ23|A0PQ23_MYCUA Chorismate pyruvate-lyase
# OS=Mycobacterium ulcerans (strain Agy99) OX=362242 GN=MUL_2003 PE=4 SV=1"
```
Chapter 16 Introducing FASTA Files
==================================
Adapted from [Wikipedia](https://en.wikipedia.org/wiki/FASTA_format): <https://en.wikipedia.org/wiki/FASTA_format>
In bioinformatics, the FASTA format is a text\-based format for representing either nucleotide sequences or amino acid (protein) sequences, in which nucleotides or amino acids are represented using single\-letter codes. The format allows for sequence names and comments to precede the sequences. The format originates from the FASTA alignment software, but has now become a near universal standard in the field of bioinformatics.
The simplicity of FASTA format makes it easy to manipulate and parse sequences using text\-processing tools and scripting languages like the R programming language and Python.
The first line in a FASTA file starts with a “\>” (greater\-than) symbol and holds summary information about the sequence, often starting with a unique accession number and followed by information like the name of the gene, the type of sequence, and the organism it is from.
On the next line(s) is the sequence itself as a standard one\-letter character string. Anything other than a valid character is ignored (including spaces, tabs, asterisks, etc…).
A multiple sequence FASTA format can be obtained by concatenating several single sequence FASTA files in a common file (also known as multi\-FASTA format).
Following the header line, the actual sequence is represented. Sequences may be protein sequences or nucleic acid sequences, and they can contain gaps or alignment characters. Sequences are expected to be represented in the standard amino acid and nucleic acid codes. Lower\-case letters are accepted and are mapped into upper\-case; a single hyphen or dash can be used to represent a gap character; and in amino acid sequences, U and \* are acceptable letters.
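If you find yourself handling FASTA sequences as plain character strings in R, these conventions are straightforward to normalize yourself. Here is a minimal sketch using base R; the example string is made up.

```
# Sketch: normalize a raw sequence string (hypothetical example)
raw_seq <- "acat gaga-CAGAC\tcccc"

seq <- toupper(raw_seq)              # map lower-case letters to upper-case
seq <- gsub("[[:space:]]", "", seq)  # remove spaces and tabs
seq <- gsub("-", "", seq)            # optionally strip gap characters
seq
```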
FASTQ format is a form of FASTA format extended to indicate information related to sequencing. It was created by the Sanger Centre in Cambridge.
Bioconductor.org’s Biostrings package can be used to read and manipulate FASTA files in R.
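Here is a minimal sketch of reading a protein FASTA file with Biostrings, assuming the package is installed; “my\_proteins.fasta” is a placeholder file name.

```
# Sketch: read a protein FASTA file with Biostrings
# (assumes the Bioconductor package Biostrings is installed;
#  "my_proteins.fasta" is a placeholder file name)
library(Biostrings)

aa <- readAAStringSet("my_proteins.fasta")  # readDNAStringSet() for nucleotides
aa          # an AAStringSet object, one entry per sequence
names(aa)   # the header lines
width(aa)   # the length of each sequence
```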
> “FASTA format is a text\-based format for representing either nucleotide sequences or peptide sequences, in which base pairs or amino acids are represented using single\-letter codes. A sequence in FASTA format begins with a single\-line description, followed by lines of sequence data. The description line is distinguished from the sequence data by a greater\-than (”\>“) symbol in the first column. It is recommended that all lines of text be shorter than 80 characters in length.” (<https://zhanglab.dcmb.med.umich.edu/FASTA/>)
16\.1 Example FASTA file
------------------------
Here is an example of the contents of a FASTA file. (If you are viewing this chapter in the form of the source .Rmd file, the `cat()` function is included just to print out the content properly and is not part of the FASTA format.)
```
cat(">gi|186681228|ref|YP_001864424.1| phycoerythrobilin:ferredoxin oxidoreductase
MNSERSDVTLYQPFLDYAIAYMRSRLDLEPYPIPTGFESNSAVVGKGKNQEEVVTTSYAFQTAKLRQIRA
AHVQGGNSLQVLNFVIFPHLNYDLPFFGADLVTLPGGHLIALDMQPLFRDDSAYQAKYTEPILPIFHAHQ
QHLSWGGDFPEEAQPFFSPAFLWTRPQETAVVETQVFAAFKDYLKAYLDFVEQAEAVTDSQNLVAIKQAQ
LRYLRYRAEKDPARGMFKRFYGAEWTEEYIHGFLFDLERKLTVVK")
```
```
## >gi|186681228|ref|YP_001864424.1| phycoerythrobilin:ferredoxin oxidoreductase
## MNSERSDVTLYQPFLDYAIAYMRSRLDLEPYPIPTGFESNSAVVGKGKNQEEVVTTSYAFQTAKLRQIRA
## AHVQGGNSLQVLNFVIFPHLNYDLPFFGADLVTLPGGHLIALDMQPLFRDDSAYQAKYTEPILPIFHAHQ
## QHLSWGGDFPEEAQPFFSPAFLWTRPQETAVVETQVFAAFKDYLKAYLDFVEQAEAVTDSQNLVAIKQAQ
## LRYLRYRAEKDPARGMFKRFYGAEWTEEYIHGFLFDLERKLTVVK
```
16\.2 Multiple sequences in a single FASTA file
-----------------------------------------------
Multiple sequences can be stored in a single FASTA file, each one separated by a blank line and each with its own header line.
```
cat(">LCBO - Prolactin precursor - Bovine
MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*
>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
DIDGDGQVNYEEFVQMMTAK*
>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
IENY")
```
```
## >LCBO - Prolactin precursor - Bovine
## MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
## EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
## VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
## ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*
##
## >MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
## MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
## FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
## DIDGDGQVNYEEFVQMMTAK*
##
## >gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
## LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
## EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
## LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
## GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
## IENY
```
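Because every sequence begins with a `>` header line, counting the sequences in a multi\-FASTA file takes only a couple of lines of base R; a minimal sketch, where the file name is a placeholder.

```
# Sketch: count the sequences in a multi-FASTA file by counting header lines
# ("my_sequences.fasta" is a placeholder file name)
fasta_lines <- readLines("my_sequences.fasta")
sum(grepl("^>", fasta_lines))   # number of sequences in the file
```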
16\.3 Multiple sequence alignments can be stored in FASTA format
----------------------------------------------------------------
**Aligned FASTA format** can be used to store the output of **Multiple Sequence Alignment (MSA)**. This format contains
1. Multiple entries, each with their own header line
2. **Gaps** inserted to align sequences are indicated by `.`
3. Spaces added to the beginning and end of sequences that vary in length are indicated by `~`
In the sample FASTA file below, the `example1` sequence has a gap of 8 positions near its beginning. The `example2` sequence has numerous `~` characters indicating that this sequence is missing data from its beginning that are present in the other sequences. The `example3` sequence has numerous `~` characters at its end, indicating that this sequence is shorter than the others.
```
cat(">example1
MKALWALLLVPLLTGCLA........EGELEVTDQLPGQSDQP.WEQALNRFWDYLRWVQ
GNQARDRLEEVREQMEEVRSKMEEQTQQIRLQAEIFQARIKGWFEPLVEDMQRQWANLME
KIQASVATNSIASTTVPLENQ
>example2
~~~~~~~~~~~~~~~~~~~~~~~~~~KVQQELEPEAGWQTGQP.WEAALARFWDYLRWVQ
SSRARGHLEEMREQIQEVRVKMEEQADQIRQKAEAFQARLKSWFEPLLEDMQRQWDGLVE
KVQAAVAT.IPTSKPVEEP~~
>example3
MRSLVVFFALAVLTGCQARSLFQAD..............APQPRWEEMVDRFWQYVSELN
AGALKEKLEETAENL...RTSLEGRVDELTSLLAPYSQKIREQLQEVMDKIKEATAALPT
QA~~~~~~~~~~~~~~~~~~~")
```
```
## >example1
## MKALWALLLVPLLTGCLA........EGELEVTDQLPGQSDQP.WEQALNRFWDYLRWVQ
## GNQARDRLEEVREQMEEVRSKMEEQTQQIRLQAEIFQARIKGWFEPLVEDMQRQWANLME
## KIQASVATNSIASTTVPLENQ
## >example2
## ~~~~~~~~~~~~~~~~~~~~~~~~~~KVQQELEPEAGWQTGQP.WEAALARFWDYLRWVQ
## SSRARGHLEEMREQIQEVRVKMEEQADQIRQKAEAFQARLKSWFEPLLEDMQRQWDGLVE
## KVQAAVAT.IPTSKPVEEP~~
## >example3
## MRSLVVFFALAVLTGCQARSLFQAD..............APQPRWEEMVDRFWQYVSELN
## AGALKEKLEETAENL...RTSLEGRVDELTSLLAPYSQKIREQLQEVMDKIKEATAALPT
## QA~~~~~~~~~~~~~~~~~~~
```
16\.4 FASTQ Format
------------------
Adapted from [Wikipedia](https://en.wikipedia.org/wiki/FASTQ_format): <https://en.wikipedia.org/wiki/FASTQ_format>
FASTQ format is a text\-based format for storing both a biological sequence (usually nucleotide sequence) and its corresponding quality scores. Both the sequence letter and quality score are each encoded with a single ASCII character for brevity.
It was originally developed at the Wellcome Trust Sanger Institute to bundle a FASTA formatted sequence and its quality data, but has recently become the *de facto* standard for storing the output of high\-throughput sequencing instruments such as the Illumina Genome Analyzer.
A FASTQ file normally uses four lines per sequence.
* Line 1 begins with a `@` character and is followed by a sequence identifier and an optional description (like a FASTA title line).
* Line 2 is the raw sequence letters.
* Line 3 begins with a `+` character and is optionally followed by the same sequence identifier (and any description) again.
* Line 4 encodes the **quality values** for the sequence in Line 2 of the file, and must contain the same number of symbols as letters in the sequence.
A FASTQ file containing a single sequence might look like this:
```
cat("@SEQ_ID
GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
+
!''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65")
```
```
## @SEQ_ID
## GATTTGGGGTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
## +
## !''*((((***+))%%%++)(%%%%).1***-+*''))**55CCF>>>>>>CCCCCCC65
```
Here are the quality value characters in left\-to\-right increasing order of quality (ASCII):
```
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~
```
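With the standard Sanger / Illumina 1.8\+ encoding, each quality character encodes a Phred score equal to its ASCII value minus 33 (note that the scale above starts at `!`, which is ASCII 33). A minimal sketch of decoding a quality string in base R:

```
# Sketch: convert a FASTQ quality string to numeric Phred scores
# (assumes the standard "Phred+33" Sanger / Illumina 1.8+ encoding)
qual <- "!''*((((***+))%%%++)"

utf8ToInt(qual) - 33   # ASCII code of each character minus 33 = Phred score
```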
FASTQ files typically do not include line breaks within the sequence or quality strings; that is, these lines do not wrap around when they reach the width of a normal page or editor window.
Chapter 18 “Worked example: Building a phylogeny in R”
======================================================
18\.1 Introduction
------------------
Phylogenies play an important role in computational biology and bioinformatics. Phylogenetics itself is an obligately computational field that only began rapid growth when computational power allowed the many algorithms it relies on to be run rapidly. Phylogenies of species, genes and proteins are used to address many biological issues, including
* Patterns of protein evolution
* Origin and evolution of phenotypic traits
* Origin and progression of epidemics
* Origin and evolution of diseases (e.g., zoonoses)
* Prediction of protein function from its sequence
* … and many more
The actual building of a phylogeny is a computationally intensive task; moreover, there are many bioinformatics and computational tasks that precede the construction of a phylogeny:
* genome sequencing and assembly
* computational gene prediction and annotation
* database searching and results screening
* pairwise sequence alignment
* data organization and cleaning
* multiple sequence alignment
* evaluation and validation of alignment accuracy
Once all of these steps have been carried out, the building of a phylogeny involves
* picking a model of sequence evolution or other description of evolution
* picking a statistical approach to tree construction
* evaluating uncertainty in the final tree
In this chapter we will work through many of these steps. In most cases we will pick the easiest or fastest option; in later chapters we will unpack the various options. This chapter is written as an interactive R session. You can follow along by opening the .Rmd file of the chapter or typing the appropriate commands into your own script. I assume that all the necessary packages have been installed and they only need to be loaded into R using the *library()* command.
This lesson walks you through an entire workflow for a bioinformatics project, including
1. obtaining FASTA sequences
2. cleaning sequences
3. creating alignments
4. creating a distance matrix
5. building a phylogenetic tree
We’ll examine the Shroom family of genes, which produces Shroom proteins essential for tissue formation in many multicellular eukaryotes, including neural tube formation in vertebrates. We’ll examine shroom in several very different organisms, including humans, mice and sea urchins. There is more than one type of shroom in vertebrates, and we’ll also look at two different Shroom genes: shroom 2 and shroom 3\.
This lesson draws on skills from previous sections of the book, but is written to act as an independent summary of these activities. There is therefore review of key aspects of R and bioinformatics throughout it.
### 18\.1\.1 Vocab
18\.2 Software Preliminaries
----------------------------
* argument
* function
* list
* named list
* vector
* named vector
* for() loop
* R console
### 18\.2\.1 R functions
* library()
* round()
* plot()
* mtext()
* nchar()
* rentrez::entrez\_fetch()
* compbio4all::entrez\_fetch\_list()
* compbio4all::print\_msa() (Coghlan 2011\)
* Biostrings::AAStringSet()
* msa::msa()
* msa::msaConvert()
* msa::msaPrettyPrint()
* seqinr::dist.alignment()
* ape::nj()
A few things need to be done to get started with our R session.
### 18\.2\.2 Download necessary packages
Many *R* sessions begin by downloading necessary software packages to augment *R’s* functionality.
If you don’t have them already, you’ll need the following packages from CRAN:
1. `ape`
2. `seqinr`
3. `rentrez`
4. `devtools`
The CRAN packages can be installed with `install.packages()`.
You’ll also need these packages from Bioconductor:
1. `msa`
2. `Biostrings`
For installing packages from Bioconductor, see the chapter at the beginning of this book on this process.
Finally, you’ll need these package from GitHub
1. `compbio4all`
2. `ggmsa`
To install packages from GitHub you can use the code `devtools::install_github("brouwern/compbio4all")` and `devtools::install_github("YuLab-SMU/ggmsa")`
This code chunk downloads most of these packages. The code has been commented out because it only needs to be run once.
```
# Install needed packages
## CRAN PACKAGES
# downloaded using install.packages()
# install.packages("rentrez",dependencies = TRUE)
# install.packages("devtools")
# install.packages("ape")
# install.packages("seqinr")
### BiocManager - CRAN package to download
##### Bioconductor packages
# requires BiocManager
# install.packages("BiocManager")
## BioConductor packages
### downloaded with BiocManager::install(), NOT install.packages()
# BiocManager::install("msa")
# BiocManager::install("Biostrings")
## GitHub packages
### requires devtools package and its function install_github()
# library(devtools)
# devtools::install_github("brouwern/combio4all")
# devtools::install_github("YuLab-SMU/ggmsa")
```
### 18\.2\.3 Load packages into memory
We now need to load up all our bioinformatics and phylogenetics software into R. This is done with the `library()` command.
To run this code just click on the sideways green triangle all the way to the right of the code.
NOTE: You’ll likely see some red code appear on your screen. No worries, totally normal!
```
# github packages
library(compbio4all)
library(ggmsa)
# CRAN packages
library(rentrez)
library(seqinr)
library(ape)
# Bioconductor packages
## msa
### The msa package is having problems on some platforms
### You can skip the msa steps if necessary. The msa output
### is used to make a distance matrix and then phylogenetics trees,
### but I provide code to build the matrix by hand so
### you can proceed even if msa doesn't work for you.
library(msa)
## Biostrings
library(Biostrings)
```
18\.3 Downloading macro\-molecular sequences
--------------------------------------------
We’re going to explore some sequences. First we need to download them. To do this we’ll use a function, `entrez_fetch()`, which accesses the [**Entrez**](ncbi.nlm.nih.gov/search/) system of databases (ncbi.nlm.nih.gov/search/). This function is from the `rentrez` package, which stands for “R\-Entrez.”
We need to tell `entrez_fetch()` several things
1. `db = ...` the type of entrez database.
2. `id = ...` the **accession** (ID) number of the sequence
3. `rettype = ...` the file type we want the function to return.
Formally, these things are called **arguments** by *R*.
We’ll use these settings:
1. `db = "protein"` to access the Entrez database of protein sequences
2. `rettype = "fasta"`, which is a standard file format for nucleic acid and protein sequences
We’ll set `id = ...` to sequences whose **accession numbers** are:
1. NP\_065910: Human shroom 3
2. AAF13269: Mouse shroom 3a
3. CAA58534: Human shroom 2
4. XP\_783573: Sea urchin shroom
There are two highly conserved regions of shroom 3:
1\. ASD 1: aa 884 to aa 1062 in hShroom3
2\. ASD 2: aa 1671 to aa 1955 in hShroom3
Normally we’d have to download these sequences by hand by pointing and clicking on GenBank records on the NCBI website. In *R* we can do it automatically; this might take a second.
All the code needed is this:
```
# Human shroom 3 (H. sapiens)
hShroom3 <- rentrez::entrez_fetch(db = "protein",
id = "NP_065910",
rettype = "fasta")
```
Note: if you aren’t connected to wifi or the server itself is having problems, then when you run this code you may get an error like this:
“Quitting from lines 244\-259 (23\-MSA\-walkthrough\-shroom.Rmd)
Error: HTTP failure: 500
{”error”:“error forwarding request”,“api\-key”:“71\.182\.228\.80”,“type”:“ip”,
“status”:“ok”}
Execution halted”
You may have to try again later to knit the code.
The output is in FASTA format. We can look at the raw output by calling up the object we created. This will print as a single continuous string of characters without line breaks:
```
hShroom3
```
```
## [1] ">NP_065910.3 protein Shroom3 [Homo sapiens]\nMMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKL\nQAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAA\nWSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYL\nRRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGR\nERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNI\nPRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSL\nGSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGM\nLYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFC\nSVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDF\nQEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTK\nAEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQ\nGFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEP\nLGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGS\nPESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPH\nTPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFP\nESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAY\nLQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLS\nGANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSG\nFGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTP\nCPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGP\nRGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTS\nDPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPV\nYSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEA\nESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDK\nSLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYY\nSVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEA\nLISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHED\nARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDF\nIPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL\n\n"
```
We’ll use the `cat()` function to do a little formatting for us; it essentially just enforces the **line breaks**:
```
cat(hShroom3)
```
```
## >NP_065910.3 protein Shroom3 [Homo sapiens]
## MMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKL
## QAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAA
## WSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYL
## RRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGR
## ERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNI
## PRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSL
## GSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGM
## LYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFC
## SVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDF
## QEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTK
## AEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQ
## GFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEP
## LGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGS
## PESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPH
## TPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFP
## ESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAY
## LQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLS
## GANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSG
## FGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTP
## CPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGP
## RGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTS
## DPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPV
## YSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEA
## ESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDK
## SLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYY
## SVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEA
## LISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHED
## ARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDF
## IPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
```
Note the initial `>`, then the header line of `NP_065910.3 protein Shroom3 [Homo sapiens]`. After that is the amino acid sequence. The underlying data also includes the **newline character** `\n` to designate where each line of amino acids stops (that is, the location of line breaks).
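If the newline character is new to you, here is a minimal toy example (the string itself is made up for illustration):
```
# a toy string containing the newline character \n
toy <- "LINE1\nLINE2"
toy        # printed as one string, with the \n shown literally
cat(toy)   # cat() honors the \n and prints two lines
```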
We can get the rest of the data by just changing the `id = ...` argument:
```
# Mouse shroom 3a (M. musculus)
mShroom3a <- entrez_fetch(db = "protein",
id = "AAF13269",
rettype = "fasta")
# Human shroom 2 (H. sapiens)
hShroom2 <- entrez_fetch(db = "protein",
id = "CAA58534",
rettype = "fasta")
# Sea-urchin shroom
sShroom <- entrez_fetch(db = "protein",
id = "XP_783573",
rettype = "fasta")
```
Here, I’ve pasted the function I used above three times into the code chunk and changed the `id = ...` argument each time. Later in this script we will avoid this clunky type of coding by using **for loops**.
I’m going to check roughly how long each of these sequences is \- each should have an at least slightly different length. If any are identical, I might have repeated an accession number or re\-used an object name. The function `nchar()` counts the number of characters in an *R* object; note that at this point the counts include the FASTA header and the newline characters, not just the amino acids.
```
nchar(hShroom3)
```
```
## [1] 2070
```
```
nchar(mShroom3a)
```
```
## [1] 2083
```
```
nchar(sShroom)
```
```
## [1] 1758
```
```
nchar(hShroom2)
```
```
## [1] 1673
```
18\.4 Prepping macromolecular sequences
---------------------------------------
> “90% of data analysis is data cleaning” (\-Just about every data analyst and data scientist)
We have our sequences, but the current format isn’t directly usable for us yet because there are several things that aren’t sequence information
1. metadata (the header)
2. page formatting information (the newline character)
We can remove this non\-sequence information using a function I wrote called `fasta_cleaner()`, which is in the `compbio4all` package. The function uses **regular expressions** to remove the info we don’t need.
If you had trouble downloading the compbio4all package, you can add fasta\_cleaner() to your R session directly by running this code:
```
fasta_cleaner <- function(fasta_object, parse = TRUE){
fasta_object <- sub("^(>)(.*?)(\\n)(.*)(\\n\\n)","\\4",fasta_object)
fasta_object <- gsub("\n", "", fasta_object)
if(parse == TRUE){
fasta_object <- stringr::str_split(fasta_object,
pattern = "",
simplify = FALSE)
}
return(fasta_object[[1]])
}
```
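To see what `fasta_cleaner()` does, here is a minimal sketch on a made\-up FASTA record (the accession and sequence below are hypothetical, used only for illustration):
```
# a hypothetical toy FASTA record: header, two short lines of sequence, blank line
toy_fasta <- ">XYZ_000001 toy protein [Some organism]\nMKT\nVLQ\n\n"
# the header and newlines should be stripped, leaving just "MKTVLQ"
fasta_cleaner(toy_fasta, parse = FALSE)
```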
If we run the name of the command without any quotation marks or parentheses, we can see the code:
```
fasta_cleaner
```
```
## function(fasta_object, parse = TRUE){
##
## fasta_object <- sub("^(>)(.*?)(\\n)(.*)(\\n\\n)","\\4",fasta_object)
## fasta_object <- gsub("\n", "", fasta_object)
##
## if(parse == TRUE){
## fasta_object <- stringr::str_split(fasta_object,
## pattern = "",
## simplify = FALSE)
## }
##
## return(fasta_object[[1]])
## }
```
Now use the function to clean our sequences; we won’t worry about what `parse = ...` is for.
```
hShroom3 <- fasta_cleaner(hShroom3, parse = F)
mShroom3a <- fasta_cleaner(mShroom3a, parse = F)
hShroom2 <- fasta_cleaner(hShroom2, parse = F)
sShroom <- fasta_cleaner(sShroom, parse = F)
```
Again, I want to do something four times, so I’ve repeated the same line of code four times with the necessary change. This gets the job done, but there are better ways to do this using for loops.
Now let’s take a peek at what our sequences look like:
```
hShroom3
```
```
## [1] "MMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKLQAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAAWSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYLRRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGRERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNIPRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSLGSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGMLYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFCSVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDFQEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTKAEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQGFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEPLGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGSPESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPHTPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFPESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAYLQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLSGANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSGFGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTPCPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGPRGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTSDPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPVYSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEAESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDKSLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYYSVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEALISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHEDARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDFIPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL"
```
The header and `\n` newline characters are gone. The sequence is now ready for use by our R alignment functions.
18\.5 Aligning sequences
------------------------
We can do a [**global alignment**](https://tinyurl.com/y4du73zq) of one sequence against another using the `pairwiseAlignment()` function from the **Bioconductor** package `Biostrings` (note the capital “B” in `Biostrings`; most *R* package names are all lower case, but not this one). Global alignment algorithms identify the best way to line up two sequences so that the optimal number of bases or amino acids match and the number of **indels** (insertions/deletions) is minimized. (Global alignment contrasts with **local alignment**, which works with portions of sequences and is used in database search programs like **BLAST**, the Basic Local Alignment Search Tool used by many biologists).
Let’s align human versus mouse shroom using the global alignment function pairwiseAlignment():
```
align.h3.vs.m3a <- Biostrings::pairwiseAlignment(
hShroom3,
mShroom3a)
```
We can peek at the alignment
```
align.h3.vs.m3a
```
```
## Global PairwiseAlignmentsSingleSubject (1 of 1)
## pattern: MMRTTEDFHKPSATLN-SNTATKGRYIYLEAFLE...KAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
## subject: MK-TPENLEEPSATPNPSRTPTE-RFVYLEALLE...KAGAISLPPALTGHATPGGTSVFGGVFPTLTSPL
## score: 2189.934
```
The **score** tells us how closely they are aligned; higher scores mean the sequences are more similar. In general, perfect matches increase scores the most, and indels decrease scores.
It’s hard to interpret scores on their own, so we can get the **percent sequence identity (PID)** (aka percent identical, proportion identity, proportion identical) using the `pid()` function.
```
Biostrings::pid(align.h3.vs.m3a)
```
```
## [1] 70.56511
```
So, *shroom3* from humans (hShroom3\) and *shroom3a* from mice (mShroom3a\) are \~71% identical (at least using this particular method of alignment, and there are many ways to do this!)
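If you want to explore this, `pid()` has a `type = ...` argument that changes how the percentage is calculated (for example, how gap positions enter the denominator); see the Biostrings help page `?pid` for the definitions. A quick sketch:
```
# compare two of the PID definitions (PID1-PID4 differ in their denominators)
Biostrings::pid(align.h3.vs.m3a, type = "PID1")
Biostrings::pid(align.h3.vs.m3a, type = "PID3")
```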
What about human shroom 3 and human shroom 2? Shroom is a **gene family**, and there are different versions of the gene within a genome.
```
align.h3.vs.h2 <- Biostrings::pairwiseAlignment(
hShroom3,
hShroom2)
```
If you take a look at the alignment you can see there are a lot of indels
```
align.h3.vs.h2
```
```
## Global PairwiseAlignmentsSingleSubject (1 of 1)
## pattern: MMRTTEDFHKPSATLNSNT--ATKGRYIYLEAFL...KAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
## subject: MEGA-EPRARPERLAEAETRAADGGRLV--EVQL...----------------PERGK-------------
## score: -5673.853
```
Check out the score itself using `score()`, which accesses it directly without all the other information.
```
score(align.h3.vs.h2)
```
```
## [1] -5673.853
```
It’s negative because there are a LOT of indels.
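If you want to dig into why, Biostrings provides accessor functions that count the parts of a pairwise alignment; a brief sketch (see the Biostrings documentation for details):
```
# count identical positions, mismatches, and indels in the alignment
Biostrings::nmatch(align.h3.vs.h2)
Biostrings::nmismatch(align.h3.vs.h2)
Biostrings::nindel(align.h3.vs.h2)
```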
Now the percent sequence identity with `pid()`:
```
Biostrings::pid(align.h3.vs.h2)
```
```
## [1] 33.83277
```
So Human shroom 3 and Mouse shroom 3 are 71% identical, but Human shroom 3 and human shroom 2 are only 34% similar? How does it work out evolutionarily that a human and mouse gene are more similar than a human and a human gene? What are the evolutionary relationships among these genes within the shroom gene family?
An important note: indels contribute negatively to an alignment score, but aren’t used in the most common calculations for PID.
18\.6 The shroom family of genes
--------------------------------
I’ve copied a table from a published paper which has accession numbers for 15 different Shroom genes. Some are different types of shroom from the same organism (e.g. shroom 1, 2, 3 and 4 from humans), and others are from different organisms (e.g. frogs, mice, bees).
```
shroom_table <- c("CAA78718" , "X. laevis Apx" , "xShroom1",
"NP_597713" , "H. sapiens APXL2" , "hShroom1",
"CAA58534" , "H. sapiens APXL", "hShroom2",
"ABD19518" , "M. musculus Apxl" , "mShroom2",
"AAF13269" , "M. musculus ShroomL" , "mShroom3a",
"AAF13270" , "M. musculus ShroomS" , "mShroom3b",
"NP_065910", "H. sapiens Shroom" , "hShroom3",
"ABD59319" , "X. laevis Shroom-like", "xShroom3",
"NP_065768", "H. sapiens KIAA1202" , "hShroom4a",
"AAK95579" , "H. sapiens SHAP-A" , "hShroom4b",
#"DQ435686" , "M. musculus KIAA1202" , "mShroom4",
"ABA81834" , "D. melanogaster Shroom", "dmShroom",
"EAA12598" , "A. gambiae Shroom", "agShroom",
"XP_392427" , "A. mellifera Shroom" , "amShroom",
"XP_783573" , "S. purpuratus Shroom" , "spShroom") #sea urchin
```
What we just made is just one long vector with all the info.
```
is(shroom_table)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
class(shroom_table)
```
```
## [1] "character"
```
```
length(shroom_table)
```
```
## [1] 42
```
I’ll do a bit of formatting; you can ignore these details if you want
```
# convert the vector to matrix using matrix()
shroom_table_matrix <- matrix(shroom_table,
byrow = T,
nrow = 14)
# convert the matrix to a dataframe using data.frame()
shroom_table <- data.frame(shroom_table_matrix,
stringsAsFactors = F)
# name columns of dataframe using names() function
names(shroom_table) <- c("accession", "name.orig","name.new")
# Create simplified species names
## access species column using $ notation
shroom_table$spp <- "Homo"
shroom_table$spp[grep("laevis",shroom_table$name.orig)] <- "Xenopus"
shroom_table$spp[grep("musculus",shroom_table$name.orig)] <- "Mus"
shroom_table$spp[grep("melanogaster",shroom_table$name.orig)] <- "Drosophila"
shroom_table$spp[grep("gambiae",shroom_table$name.orig)] <- "mosquito"
shroom_table$spp[grep("mellifera",shroom_table$name.orig)] <- "bee"
shroom_table$spp[grep("purpuratus",shroom_table$name.orig)] <- "sea urchin"
```
Take a look at the finished table
```
shroom_table
```
```
## accession name.orig name.new spp
## 1 CAA78718 X. laevis Apx xShroom1 Xenopus
## 2 NP_597713 H. sapiens APXL2 hShroom1 Homo
## 3 CAA58534 H. sapiens APXL hShroom2 Homo
## 4 ABD19518 M. musculus Apxl mShroom2 Mus
## 5 AAF13269 M. musculus ShroomL mShroom3a Mus
## 6 AAF13270 M. musculus ShroomS mShroom3b Mus
## 7 NP_065910 H. sapiens Shroom hShroom3 Homo
## 8 ABD59319 X. laevis Shroom-like xShroom3 Xenopus
## 9 NP_065768 H. sapiens KIAA1202 hShroom4a Homo
## 10 AAK95579 H. sapiens SHAP-A hShroom4b Homo
## 11 ABA81834 D. melanogaster Shroom dmShroom Drosophila
## 12 EAA12598 A. gambiae Shroom agShroom mosquito
## 13 XP_392427 A. mellifera Shroom amShroom bee
## 14 XP_783573 S. purpuratus Shroom spShroom sea urchin
```
18\.7 Downloading multiple sequences
------------------------------------
Instead of getting one sequence at a time we can download several by accessing the “accession” column from the table
```
shroom_table$accession
```
We can give this whole set of accessions to `entrez_fetch()`, which is a **vectorized function** which knows how to handle a vector of inputs.
```
shrooms <- entrez_fetch(db = "protein",
id = shroom_table$accession,
rettype = "fasta")
```
We can look at what we got here with `cat()` (I won’t display this because it is very long!)
```
cat(shrooms)
```
The current format of these data is a single, very long character string. This is a standard way to store, share and transmit FASTA files, but in *R* we’ll need a slightly different format.
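A quick check confirms that everything came back as one long character string:
```
# the whole download is a single character string
length(shrooms)  # should be 1
nchar(shrooms)   # total number of characters; the exact value may change as records are updated
```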
We’ll download all of the sequences again, this time using a function from `compbio4all` called `entrez_fetch_list()` which is a **wrapper** function I wrote to put the output of `entrez_fetch()` into an *R* data format called a **list**.
This function is contained within the compbio4all package; however, if you are having trouble with the package you can enter it directly into your R session by running the source code of the function.
```
entrez_fetch_list <- function(db, id, rettype, ...){
#setup list for storing output
n.seq <- length(id)
list.output <- as.list(rep(NA, n.seq))
names(list.output) <- id
# get output
for(i in 1:length(id)){
list.output[[i]] <- rentrez::entrez_fetch(db = db,
id = id[i],
rettype = rettype)
}
return(list.output)
}
```
However we get the function, we can use it to download a bunch of FASTA files and store them in an R list.
```
shrooms_list <- entrez_fetch_list(db = "protein",
id = shroom_table$accession,
rettype = "fasta")
```
Now we have an R **list** which has 14 **elements**, one for each sequence in our table.
```
length(shrooms_list)
```
```
## [1] 14
```
Each element of the list contains a FASTA entry for one sequence
```
shrooms_list[[1]]
```
```
## [1] ">CAA78718.1 apical protein [Xenopus laevis]\nMSAFGNTIERWNIKSTGVIAGLGHSERISPVRSMTTLVDSAYSSFSGSSYVPEYQNSFQHDGCHYNDEQL\nSYMDSEYVRAIYNPSLLDKDGVYNDIVSEHGSSKVALSGRSSSSLCSDNTTSVHRTSPAKLDNYVTNLDS\nEKNIYGDPINMKHKQNRPNHKAYGLQRNSPTGINSLQEKENQLYNPSNFMEIKDNYFGRSLDVLQADGDI\nMTQDSYTQNALYFPQNQPDQYRNTQYPGANRMSKEQFKVNDVQKSNEENTERDGPYLTKDGQFVQGQYAS\nDVRTSFKNIRRSLKKSASGKIVAHDSQGSCWIMKPGKDTPSFNSEGTITDMDYDNREQWDIRKSRLSTRA\nSQSLYYESNEDVSGPPLKAMNSKNEVDQTLSFQKDATVKSIPLLSQQLQQEKCKSHPLSDLNCEKITKAS\nTPMLYHLAGGRHSAFIAPVHNTNPAQQEKLKLESKTLERMNNISVLQLSEPRPDNHKLPKNKSLTQLADL\nHDSVEGGNSGNLNSSAEESLMNDYIEKLKVAQKKVLRETSFKRKDLQMSLPCRFKLNPPKRPTIDHFRSY\nSSSSANEESAYLQTKNSADSSYKKDDTEKVAVTRIGGRKRITKEQKKLCYSEPEKLDHLGIQKSNFAWKE\nEPTFANRREMSDSDISANRIKYLESKERTNSSSNLSKTELKQIQHNALVQYMERKTNQRPNSNPQVQMER\nTSLGLPNYNEWSIYSSETSSSDASQKYLRRRSAGASSSYDATVTWNDRFGKTSPLGRSAAEKTAGVQRKT\nFSDQRTLDGSQEHLEGSSPSLSQKTSKSTHNEQVSYVNMEFLPSSHSKNHMYNDRLTVPGDGTSAESGRM\nFVSKSRGKSMEEIGTTDIVKLAELSHSSDQLYHIKGPVISSRLENTRTTAASHQDRLLASTQIETGNLPR\nQTHQESVVGPCRSDLANLGQEAHSWPLRASDVSPGTDNPCSSSPSAEVQPGAPEPLHCLQTEDEVFTPAS\nTARNEEPNSTAFSYLLSTGKPVSQGEATALSFTFLPEQDRLEHPIVSETTPSSESDENVSDAAAEKETTT\nTQLPETSNVNKPLGFTVDNQEVEGDGEPMQPEFIDSSKQLELSSLPSSQVNIMQTAEPYLGDKNIGNEQK\nTEDLEQKSKNPEEDDLPKVKLKSPEDEILEELVKEIVAKDKSLLNCLQPVSVRESAMDLMKSLFPMDVTA\nAEKSRTRGLLGKDKGETLKKNNSDLESSSKLPSKITGMLQKRPDGESLDDITLKKMELLSKIGSKLEDLC\nEQREFLLSDISKNTTNGNNMQTMVKELCKPNEFERYMMFIGDLEKVVSLLFSLSTRLTRVENSLSKVDEN\nTDAEEMQSLKERHNLLSSQREDAKDLKANLDRREQVVTGILVKYLNEEQLQDYKHFVRLKTSLLIEQKNL\nEEKIKVYEEQFESIHNSLPP\n\n"
```
We now need to clean up each one of these sequences. We could do this by running our fasta\_cleaner() function on each of the elements of the list like this.
```
# clean sequence 1 in element 1 of list
shrooms_list[[1]] <- fasta_cleaner(shrooms_list[[1]], parse = F)
# clean sequence 2 in element 2 of list
shrooms_list[[2]] <- fasta_cleaner(shrooms_list[[2]], parse = F)
# clean sequence 3 in element 3 of list
shrooms_list[[3]] <- fasta_cleaner(shrooms_list[[3]], parse = F)
# clean sequence x in element x of list
## ...
```
Copying the same line of code 14 times and making the necessary changes takes time and is error prone. We can do this more easily using a simple `for()` loop:
```
for(i in 1:length(shrooms_list)){
shrooms_list[[i]] <- fasta_cleaner(shrooms_list[[i]], parse = F)
}
```
For loops all start with for(…) and contain the code we want to run repeatedly within curly brackets {…}. We won’t worry about the details of how for() loops work in R, but the upshot is that they allow us to easily repeat the same step many times while making the necessary minor changes.
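If for() loops are new to you, here is a minimal toy example:
```
# a minimal for() loop: the code inside the curly brackets runs once for each value of i
for(i in 1:3){
  print(i)
}
```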
Now, for the second to last step of data preparation: we need to take each one of our sequences from our list and put it into a **vector**, in particular a **named vector**.
First, we need a vector to store stuff. We’ll make an empty vector that just has NAs in it using the rep() function.
```
shrooms_vector <- rep(NA, length(shrooms_list))
```
The result looks like this
```
shrooms_vector
```
```
## [1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA
```
Now we use a for() loop to take each element of shrooms\_list and put it into the vector shrooms\_vector (the precise details don’t matter; what is important is that we are using a for loop so we don’t have to repeat the same line of code 14 times)
```
# run the loop
for(i in 1:length(shrooms_vector)){
shrooms_vector[i] <- shrooms_list[[i]]
}
```
Now we name the vector. This is done using the names() function. (The exact details don’t matter.)
```
# name the vector
names(shrooms_vector) <- names(shrooms_list)
```
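If named vectors are new to you, here is a minimal toy example (the values and names are made up):
```
# a toy named vector
toy_vec <- c(10, 20)
names(toy_vec) <- c("first", "second")
toy_vec
```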
Now the final step: we need to convert our named vector to a **string set** using `Biostrings::AAStringSet()`. Note the `_ss` tag at the end of the object we’re assigning the output to, which designates this as a string set.
A string set is just a type of data format someone came up with to organize and annotate sequence data. (All of these steps are annoying, and someday I’ll write some functions to simplify all of this).
```
shrooms_vector_ss <- Biostrings::AAStringSet(shrooms_vector)
```
18\.8 Multiple sequence alignment
---------------------------------
We must **align** all of the sequences we downloaded and use that **alignment** to build a **phylogenetic tree**. This will tell us how the different genes, both within and between species, are likely to be related.
(Note: previously we’ve explored the similarities between sequences using pairwiseAlignment(). This helped us understand the data, but wasn’t actually necessary for making an MSA or phylogenetic tree).
### 18\.8\.1 Building an Multiple Sequence Alignment (MSA)
Multiple sequence alignments (MSAs) appear frequently in papers in molecular biology, biochemistry, and molecular evolution. They are also the basis for almost all phylogenies built from macromolecular sequences with modern software. MSAs are extensions of the global alignment of two sequences. However, while a function like pairwiseAlignment() tries to come up with the best way to align two sequences, an MSA algorithm tries to come up with the joint alignment that is best across all of the sequences. This not only takes longer to do, but can sometimes produce slightly different results than a set of individual pairwise alignments.
We’ll use the `msa` package, which implements the **ClustalW** multiple sequence alignment algorithm. Normally we’d have to download the ClustalW program and either point\-and\-click our way through it or use the **command line**, but the msa authors implemented the algorithm in R so we can do this with a line of R code. This will take a second or two.
NOTE: While based in R, the msa package uses an R package called Rcpp (“R C\+\+”) to integrate R with code from the language C\+\+. There seem to be some issues related to this process on some computers. If you can’t get msa to load or msa() to run, you can comment out the msa\-related code.
```
shrooms_align <- msa(shrooms_vector_ss,
method = "ClustalW")
```
```
## use default substitution matrix
```
While msa() runs, R tells you “use default substitution matrix”, which means it’s using the program’s default way of scoring alignments; that is, how to assign values to matches, mismatches, and indels while trying to come up with the best alignment of all the sequences.
### 18\.8\.2 Viewing an MSA
Once we build an MSA we need to visualize it. There are several ways to do this, and it can be a bit tricky because genes and proteins are long and most easily viewed left to right. Often we’ll identify a subset of bases to focus on, such as a sequence motif or domain.
#### 18\.8\.2\.1 Viewing an MSA in R
The msa() function produces a special type of MSA object:
```
class(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment"
## attr(,"package")
## [1] "msa"
```
```
is(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment" "AAMultipleAlignment" "MsaMetaData"
## [4] "MultipleAlignment"
```
We can look at the direct output from `msa()`, but it’s not very helpful \- it’s just a glimpse of part of the alignment. The “…” in the middle just means “a lot of other stuff in the middle.”
```
shrooms_align
```
```
## CLUSTAL 2.1
##
## Call:
## msa(shrooms_vector_ss, method = "ClustalW")
##
## MsaAAMultipleAlignment with 14 rows and 2252 columns
## aln names
## [1] -------------------------...------------------------- NP_065768
## [2] -------------------------...------------------------- AAK95579
## [3] -------------------------...SVFGGVFPTLTSPL----------- AAF13269
## [4] -------------------------...SVFGGVFPTLTSPL----------- AAF13270
## [5] -------------------------...CTFSGIFPTLTSPL----------- NP_065910
## [6] -------------------------...NKS--LPPPLTSSL----------- ABD59319
## [7] -------------------------...------------------------- CAA58534
## [8] -------------------------...------------------------- ABD19518
## [9] -------------------------...LT----------------------- NP_597713
## [10] -------------------------...------------------------- CAA78718
## [11] -------------------------...------------------------- EAA12598
## [12] -------------------------...------------------------- ABA81834
## [13] MTELQPSPPGYRVQDEAPGPPSCPP...------------------------- XP_392427
## [14] -------------------------...AATSSSSNGIGGPEQLNSNATSSYC XP_783573
## Con -------------------------...------------------------- Consensus
```
A function called `print_msa()` (Coghlan 2011\) which I’ve put into `compbio4all` can give us more informative output by printing out the actual alignment into the R console.
To use `print_msa()` we first need to make a few minor tweaks to the output of the msa() function. These are behind\-the\-scenes changes, so don’t worry about the details right now. We’ll change the name to `shrooms_align_seqinr` to indicate that one of our changes is putting this into a format defined by the bioinformatics package `seqinr`.
First, we change the **class** of the variable to let our functions know exactly what we’re working with.
The output of the class() function can sometimes be a bit complicated; in this case it’s telling us that the “class” of the shrooms\_align object is “MsaAAMultipleAlignment”, which is a special purpose type of R object created by the msa() function (“Msa…”) for amino acids (…“AA”…) that is a multiple sequence alignment (“…MultipleAlignment”).
```
class(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment"
## attr(,"package")
## [1] "msa"
```
To make shrooms\_align play nice with our other functions it just has to be of the class “AAMultipleAlignment”. (Again, this is annoying, and took me a while to figure out when I was creating this workflow.)
You rarely have to change the class of an R object; the usual use of the class() function is to just get an idea of what an object is. You can also use the class() function to change the class of an object.
```
class(shrooms_align) <- "AAMultipleAlignment"
```
Now we need to use a function from msa called msaConvert() to make *another* tweak to the object to make it work with functions from the seqinr package. We’ll change the name of our msa object from shrooms\_align to shrooms\_align\_seqinr to reflect this change. (Another annoying step that took me a while to figure out when I first did this.)
```
shrooms_align_seqinr <- msaConvert(shrooms_align,
type = "seqinr::alignment")
```
I won’t display the raw output from `shrooms_align_seqinr` because it’s very long; we have 14 shroom genes, and shroom happens to be a rather long gene.
Now that I’ve done the necessary tweaks let me display the msa. This will only be really useful if I have a big monitor.
```
compbio4all::print_msa(alignment = shrooms_align_seqinr,
chunksize = 60)
```
#### 18\.8\.2\.2 Displaying an MSA as an R plot
Printing an MSA to the R plot window can be useful for making nicer\-looking figures of parts of an MSA. I’m going to just show about 100 amino acids near the end of the alignment, where there is the most overlap across all of the sequences. This is set with the `start = ...` and `end = ...` arguments.
Note that we’re using the `shrooms_align` object again, but with the class reassigned.
```
# key step - must have class set properly
class(shrooms_align) <- "AAMultipleAlignment"
# run ggmsa
ggmsa::ggmsa(shrooms_align, # shrooms_align, NOT shrooms_align_seqinr
start = 2000,
end = 2100)
```
#### 18\.8\.2\.3 Saving an MSA as PDF
We can take a look at the alignment in PDF format if we want. In this case I’m going to just show about 100 amino acids near the end of the alignment, where there is the most overlap across all of the sequences. This is set with the `y = c(...)` argument.
In order for this to work you need to have a program called LaTeX installed on your computer. LaTeX can occasionally be tricky to install, so you can skip this step if necessary.
If you want to try to install LaTeX, you can run this code to see if it works for you:
```
install.packages("tinytex")
tinytex::install_tinytex()
```
If you have LaTeX working on your computer you can then run this code. (If this code doesn’t work, you can comment it out.)
```
msaPrettyPrint(shrooms_align, # alignment
file = "shroom_msa.pdf", # file name
y=c(2000, 2100), # range
askForOverwrite=FALSE)
```
You can see where R is saving things by running `getwd()`
```
getwd()
```
```
## [1] "/Users/nlb24/OneDrive - University of Pittsburgh/0-books/lbrb-bk/lbrb"
```
On a Mac you can usually find the file by searching in Finder for the file name, which I set to be “shroom\_msa.pdf” using the `file = ...` argument above.
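You can also confirm from within R that the file was written by listing the PDF files in your working directory:
```
# list any PDF files in the current working directory
list.files(pattern = "pdf$")
```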
18\.9 A subset of sequences
---------------------------
To make things easier we’ll move forward with just a subset of sequences:
* XP\_392427: amShroom (bee shroom)
* EAA12598: agShroom (mosquito shroom)
* ABA81834: dmShroom (*Drosophila* shroom)
* XP\_783573: spShroom (sea urchin shroom)
* CAA78718: xShroom1 (frog shroom)
Our main working object shrooms\_vector\_ss has the names of our genes listed
```
names(shrooms_vector_ss)
```
```
## [1] "CAA78718" "NP_597713" "CAA58534" "ABD19518" "AAF13269" "AAF13270"
## [7] "NP_065910" "ABD59319" "NP_065768" "AAK95579" "ABA81834" "EAA12598"
## [13] "XP_392427" "XP_783573"
```
We can select the ones we want to focus on by first making a vector of the names:
```
names.focal <- c("XP_392427","EAA12598","ABA81834","XP_783573","CAA78718")
```
We can use this vector and bracket notation to select what we want from shrooms\_vector\_ss:
```
shrooms_vector_ss[names.focal]
```
```
## AAStringSet object of length 5:
## width seq names
## [1] 2126 MTELQPSPPGYRVQDEAPGPPSC...GREIQDKVKLGEEQLAALREAID XP_392427
## [2] 674 IPFSSSPKNRSNSKASYLPRQPR...ADKIKLGEEQLAALKDTLVQSEC EAA12598
## [3] 1576 MKMRNHKENGNGSEMGESTKSLA...AVRIKGSEEQLSSLSDALVQSDC ABA81834
## [4] 1661 MMKDAMYPTTTSTTSSSVNPLPK...TSSSSNGIGGPEQLNSNATSSYC XP_783573
## [5] 1420 MSAFGNTIERWNIKSTGVIAGLG...KNLEEKIKVYEEQFESIHNSLPP CAA78718
```
Let’s assign the subset of sequences to a new object called shrooms\_vector\_ss\_subset.
```
shrooms_vector_ss_subset <- shrooms_vector_ss[names.focal]
```
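A quick sanity check that the subset contains the five sequences we asked for:
```
# should be 5 sequences, named by their accession numbers
length(shrooms_vector_ss_subset)
names(shrooms_vector_ss_subset)
```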
Let’s make another MSA with just this subset. If msa isn’t working for you, you can comment this out.
```
shrooms_align_subset <- msa(shrooms_vector_ss_subset,
method = "ClustalW")
```
```
## use default substitution matrix
```
To view it using ggmsa we need to do those annoying conversions again.
```
class(shrooms_align_subset) <- "AAMultipleAlignment"
shrooms_align_subset_seqinr <- msaConvert(shrooms_align_subset, type = "seqinr::alignment")
```
Then we can plot it:
```
ggmsa::ggmsa(shrooms_align_subset, # shrooms_align, NOT shrooms_align_seqinr
start = 2030,
end = 2100)
```
We can save our new smaller MSA like this.
```
msaPrettyPrint(shrooms_align_subset, # alignment
file = "shroom_msa_subset.pdf", # file name
y=c(2030, 2100), # range
askForOverwrite=FALSE)
```
18\.10 Genetic distances of sequence in subset
----------------------------------------------
While an MSA is a good way to examine a sequence, it’s hard to assess all of the information visually. A phylogenetic tree allows you to summarize patterns in an MSA. The fastest way to make phylogenetic trees is to first summarize an MSA using a **genetic distance matrix**. The more amino acids that are identical to each other, the smaller the genetic distance is between them and the less evolution has occurred.
We usually work in terms of *difference* or **genetic distance** (a.k.a. **evolutionary distance**), though often we also talk in terms of similarity or identity.
Calculating genetic distance from an MSA is done using the `seqinr::dist.alignment()` function.
```
shrooms_subset_dist <- seqinr::dist.alignment(shrooms_align_subset_seqinr,
matrix = "identity")
```
This produces a “dist” class object.
```
is(shrooms_subset_dist)
```
```
## [1] "dist" "oldClass"
```
```
class(shrooms_subset_dist)
```
```
## [1] "dist"
```
If you’ve been having trouble with the MSA software, the data necessary to build the distance matrix directly in R is in this code chunk (you can ignore the details).
```
shrooms_subset_dist_alt <- matrix(data = NA,
nrow = 5,
ncol = 5)
distances <- c(0.8260049,
0.8478722, 0.9000568,
0.9244596, 0.9435187, 0.9372139,
0.9238779, 0.9370038, 0.9323225,0.9413209)
shrooms_subset_dist_alt[lower.tri(shrooms_subset_dist_alt)] <- distances
seqnames <- c("EAA12598","ABA81834","XP_392427", "XP_783573","CAA78718")
colnames(shrooms_subset_dist_alt) <- seqnames
row.names(shrooms_subset_dist_alt) <- seqnames
shrooms_subset_dist_alt <- as.dist(shrooms_subset_dist_alt)
shrooms_subset_dist <- shrooms_subset_dist_alt
```
We’ve made a matrix using `dist.alignment()`; let’s round it off so it’s easier to look at, using the `round()` function.
```
shrooms_subset_dist_rounded <- round(shrooms_subset_dist,
digits = 3)
```
If we want to look at it we can type
```
shrooms_subset_dist_rounded
```
```
## EAA12598 ABA81834 XP_392427 XP_783573
## ABA81834 0.826
## XP_392427 0.848 0.944
## XP_783573 0.900 0.937 0.937
## CAA78718 0.924 0.924 0.932 0.941
```
Note that we have 5 sequences, but the matrix is 4 x 4\. This is because redundant information is dropped, including distances from one sequence to itself. This makes it so that the first column is EAA12598, but the first row is ABA81834\.
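If you prefer to see the full, symmetric matrix, you can convert the “dist” object with `as.matrix()`:
```
# view the distance matrix in full, symmetric form
as.matrix(shrooms_subset_dist_rounded)
```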
18\.11 Phylogenetic trees of subset sequences (finally!)
-------------------------------------------------------
We got our sequences, built a multiple sequence alignment, and calculated the genetic distance between sequences. Now we are \- finally \- ready to build a phylogenetic tree.
First, we let R figure out the structure of the tree. There are **MANY** ways to build phylogenetic trees. We’ll use a common one for exploring sequences, the **neighbor joining** algorithm, via the function `nj()`. Neighbor joining uses genetic distances to cluster sequences into **clades**.
nj() is a simple function that takes only a single argument, a distance matrix.
```
# Note - not using rounded values
tree_subset <- nj(shrooms_subset_dist)
```
### 18\.11\.1 Plotting phylogenetic trees
Now we’ll make a quick plot of our tree using `plot()` (and add a little label using an important function called `mtext()`).
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
type = "unrooted",
use.edge.length = F)
# add label
mtext(text = "Shroom family gene tree - UNrooted, no branch lengths")
```
This is an **unrooted tree** with no outgroup defined. For the sake of plotting we’ve also ignored the evolutionary distance between the sequences, so the branch lengths don’t have meaning.
To make a rooted tree we remove `type = "unrooted"`. In the case of neighbor joining, the algorithm tries to figure out the outgroup on its own.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
use.edge.length = F)
# add label
mtext(text = "Shroom family gene tree - rooted, no branch lenths")
```
We can include information about branch length by setting `use.edge.length = ...` to `T`.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
use.edge.length = T)
# add label
mtext(text = "Shroom family gene tree - rooted, with branch lenths")
```
Now the length of the branches indicates the evolutionary distance between sequences and corresponds to the distances reported in our distance matrix. The branches are all very long, indicating that these genes have been evolving independently for many millions of years.
An important note: the vertical lines on the tree have no meaning, only the horizontal ones.
Because the branch lengths are all so long I find this tree a bit hard to view when it’s rooted. Let’s make it unrooted again.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
type = "unrooted",
use.edge.length = T)
# add label
mtext(text = "Shroom family gene tree - rooted, with branch lenths")
```
Now you can see that the ABA and EAA sequences form a clade, and that the distance between them is somewhat smaller than the distance between other sequences. If we go back to our original distance matrix, we can see that the smallest genetic distance is between ABA and EAA at 0\.826\.
```
shrooms_subset_dist_rounded
```
```
## EAA12598 ABA81834 XP_392427 XP_783573
## ABA81834 0.826
## XP_392427 0.848 0.944
## XP_783573 0.900 0.937 0.937
## CAA78718 0.924 0.924 0.932 0.941
```
We can confirm that this is the minimum using the min() function.
```
min(shrooms_subset_dist_rounded)
```
```
## [1] 0.826
```
### 18\.2\.1 R functions
* library()
* round()
* plot()
* mtext()
* nchar()
* rentrez::entrez\_fetch()
* compbio4all::entrez\_fetch\_list()
* compbio4all::print\_msa() (Coghlan 2011\)
* Biostrings::AAStringSet()
* msa::msa()
* msa::msaConvert()
* msa::msaPrettyPrint()
* seqinr::dist.alignment()
* ape::nj()
A few things need to be done to get started with our R session.
### 18\.2\.2 Download necessary packages
Many *R* sessions begin by downloading necessary software packages to augment *R’s* functionality.
If you don’t have them already, you’ll need the following packages from CRAN:
1. `ape`
2. `seqinr`
3. `rentrez`
4. `devtools`
The CRAN packages can be loaded with `install.packages()`.
You’ll also need these packages from Bioconductor:
1. `msa`
2. `Biostrings`
For installing packages from Bioconductor, see the chapter at the beginning of this book on this process.
Finally, you’ll need these package from GitHub
1. `compbio4all`
2. `ggmsa`
To install packages from GitHub you can use the code `devtools::install_github("brouwern/compbio4all")` and `devtools::install_github("YuLab-SMU/ggmsa")`
This code chunk downloads most of these packages. The code has been commented out because it only needs to be run once.
```
# Install needed packages
## CRAN PACKAGES
# downloaded using install.packages()
# install.packages("rentrez",dependencies = TRUE)
# install.packages("devtools")
# install.packages("ape")
# install.packages("seqinr")
### BiocManager - CRAN package to download
##### Bioconductor packages
# requires BiocManager
# install.packages("BiocManager")
## BioConductor packages
### downloaded with BiocManager::install(), NOT install.packages()
# BiocManager::install("msa")
# BiocManager::install("Biostrings")
## GitHub packages
### requires devtools package and its function install_github()
# library(devtools)
# devtools::install_github("brouwern/combio4all")
# devtools::install_github("YuLab-SMU/ggmsa")
```
### 18\.2\.3 Load packages into memory
We now need to load up all our bioinformatics and phylogenetics software into R. This is done with the `library()` command.
To run this code just click on the sideways green triangle all the way to the right of the code.
NOTE: You’ll likely see some red code appear on your screen. No worries, totally normal!
```
# github packages
library(compbio4all)
library(ggmsa)
# CRAN packages
library(rentrez)
library(seqinr)
library(ape)
# Bioconductor packages
## msa
### The msa package is having problems on some platforms
### You can skip the msa steps if necessary. The msa output
### is used to make a distance matrix and then phylogenetics trees,
### but I provide code to build the matrix by hand so
### you can proceed even if msa doesn't work for you.
library(msa)
## Biostrings
library(Biostrings)
```
### 18\.2\.1 R functions
* library()
* round()
* plot()
* mtext()
* nchar()
* rentrez::entrez\_fetch()
* compbio4all::entrez\_fetch\_list()
* compbio4all::print\_msa() (Coghlan 2011\)
* Biostrings::AAStringSet()
* msa::msa()
* msa::msaConvert()
* msa::msaPrettyPrint()
* seqinr::dist.alignment()
* ape::nj()
A few things need to be done to get started with our R session.
### 18\.2\.2 Download necessary packages
Many *R* sessions begin by downloading necessary software packages to augment *R’s* functionality.
If you don’t have them already, you’ll need the following packages from CRAN:
1. `ape`
2. `seqinr`
3. `rentrez`
4. `devtools`
The CRAN packages can be loaded with `install.packages()`.
You’ll also need these packages from Bioconductor:
1. `msa`
2. `Biostrings`
For installing packages from Bioconductor, see the chapter at the beginning of this book on this process.
Finally, you’ll need these package from GitHub
1. `compbio4all`
2. `ggmsa`
To install packages from GitHub you can use the code `devtools::install_github("brouwern/compbio4all")` and `devtools::install_github("YuLab-SMU/ggmsa")`
This code chunk downloads most of these packages. The code has been commented out because it only needs to be run once.
```
# Install needed packages
## CRAN PACKAGES
# downloaded using install.packages()
# install.packages("rentrez",dependencies = TRUE)
# install.packages("devtools")
# install.packages("ape")
# install.packages("seqinr")
### BiocManager - CRAN package to download
##### Bioconductor packages
# requires BiocManager
# install.packages("BiocManager")
## BioConductor packages
### downloaded with BiocManager::install(), NOT install.packages()
# BiocManager::install("msa")
# BiocManager::install("Biostrings")
## GitHub packages
### requires devtools package and its function install_github()
# library(devtools)
# devtools::install_github("brouwern/combio4all")
# devtools::install_github("YuLab-SMU/ggmsa")
```
### 18\.2\.3 Load packages into memory
We now need to load up all our bioinformatics and phylogenetics software into R. This is done with the `library()` command.
To run this code just click on the sideways green triangle all the way to the right of the code.
NOTE: You’ll likely see some red code appear on your screen. No worries, totally normal!
```
# github packages
library(compbio4all)
library(ggmsa)
# CRAN packages
library(rentrez)
library(seqinr)
library(ape)
# Bioconductor packages
## msa
### The msa package is having problems on some platforms
### You can skip the msa steps if necessary. The msa output
### is used to make a distance matrix and then phylogenetics trees,
### but I provide code to build the matrix by hand so
### you can proceed even if msa doesn't work for you.
library(msa)
## Biostrings
library(Biostrings)
```
18\.3 Downloading macro\-molecular sequences
--------------------------------------------
We’re going to explore some sequences. First we need to download them. To do this we’ll use a function, `entrez_fretch()`, which accesses the [**Entrez**](ncbi.nlm.nih.gov/search/) system of database (ncbi.nlm.nih.gov/search/). This function is from the `rentrez` package, which stands for “R\-Entrez.”
We need to tell `entrez_fetch()` several things
1. `db = ...` the type of entrez database.
2. `id = ...` the **accession** (ID) number of the sequence
3. `rettype = ...` file type what we want the function to return.
Formally, these things are called **arguments** by *R*.
We’ll use these settings:
1. `db = "protein"` to access the Entrez database of protein sequences
2. `rettype = "fasta"`, which is a standard file format for nucleic acid and protein sequences
We’ll set `id = ...` to sequences whose **accession numbers** are:
1. NP\_065910: Human shroom 3
2. AAF13269: Mouse shroom 3a
3. CAA58534: Human shroom 2
4. XP\_783573: Sea urchin shroom
There are two highly conserved regions of shroom 3
1\. ASD 1: aa 884 to aa 1062 in hShroom3
1\. ASD 2: aa 1671 to aa 1955 in hShroom3
Normally we’d have to download these sequences by hand through pointing and clicking on GeneBank records on the NCBI website. In *R* we can do it automatically; this might take a second.
All the code needed is this:
```
# Human shroom 3 (H. sapiens)
hShroom3 <- rentrez::entrez_fetch(db = "protein",
id = "NP_065910",
rettype = "fasta")
```
Note: if you aren’t connected to wifi or the server itself is having problem, then when you run this code you may get an error like this:
“Quitting from lines 244\-259 (23\-MSA\-walkthrough\-shroom.Rmd)
Error: HTTP failure: 500
{”error”:“error forwarding request”,“api\-key”:“71\.182\.228\.80”,“type”:“ip”,
“status”:“ok”}
Execution halted”
You may have to try again later to knit the code.
The output is in FASTA format. We can look at the raw output by calling up the object we created. This will run as single continuous string of characters without line breaks
```
hShroom3
```
```
## [1] ">NP_065910.3 protein Shroom3 [Homo sapiens]\nMMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKL\nQAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAA\nWSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYL\nRRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGR\nERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNI\nPRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSL\nGSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGM\nLYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFC\nSVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDF\nQEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTK\nAEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQ\nGFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEP\nLGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGS\nPESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPH\nTPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFP\nESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAY\nLQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLS\nGANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSG\nFGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTP\nCPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGP\nRGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTS\nDPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPV\nYSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEA\nESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDK\nSLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYY\nSVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEA\nLISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHED\nARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDF\nIPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL\n\n"
```
We’ll use the `cat()` function to do a little formatting for us; it essentially just enforces the **lines breaks**:
```
cat(hShroom3)
```
```
## >NP_065910.3 protein Shroom3 [Homo sapiens]
## MMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKL
## QAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAA
## WSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYL
## RRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGR
## ERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNI
## PRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSL
## GSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGM
## LYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFC
## SVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDF
## QEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTK
## AEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQ
## GFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEP
## LGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGS
## PESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPH
## TPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFP
## ESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAY
## LQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLS
## GANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSG
## FGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTP
## CPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGP
## RGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTS
## DPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPV
## YSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEA
## ESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDK
## SLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYY
## SVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEA
## LISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHED
## ARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDF
## IPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
```
Note the initial `>`, then the header line of `NP_065910.3 protein Shroom3 [Homo sapiens]`. After that is the amino acid sequence. The underlying data also includes the **newline character** `\n` to designate where each line of amino acids stops (that is, the location of line breaks).
We can get the rest of the data by just changing the `id = ...` argument:
```
# Mouse shroom 3a (M. musculus)
mShroom3a <- entrez_fetch(db = "protein",
id = "AAF13269",
rettype = "fasta")
# Human shroom 2 (H. sapiens)
hShroom2 <- entrez_fetch(db = "protein",
id = "CAA58534",
rettype = "fasta")
# Sea-urchin shroom
sShroom <- entrez_fetch(db = "protein",
id = "XP_783573",
rettype = "fasta")
```
Here, I’ve pasted the function I used above three times into the code chunk and changed the id \= … statement. Later in this script will avoid this clunky type of coding by using **for loops**.
I’m going to check about how long each of these sequences is \- each should have an at least slightly different length. If any are identical, I might have repeated an accession name or re\-used an object name. The function `nchar()` counts of the number of characters in an *R* object.
```
nchar(hShroom3)
```
```
## [1] 2070
```
```
nchar(mShroom3a)
```
```
## [1] 2083
```
```
nchar(sShroom)
```
```
## [1] 1758
```
```
nchar(hShroom2)
```
```
## [1] 1673
```
18\.4 Prepping macromolecular sequences
---------------------------------------
> “90% of data analysis is data cleaning” (\-Just about every data analyst and data scientist)
We have our sequences, but the current format isn’t directly usable for us yet because there are several things that aren’t sequence information
1. metadata (the header)
2. page formatting information (the newline character)
We can remove this non\-sequence information using a function I wrote called `fasta_cleaner()`, which is in the `compbio4all` package. The function uses **regular expressions** to remove the info we don’t need.
If you had trouble downloading the compbio4all package function you can add fasta\_cleaner() to your R session directly by running this code:
```
fasta_cleaner <- function(fasta_object, parse = TRUE){
  fasta_object <- sub("^(>)(.*?)(\\n)(.*)(\\n\\n)","\\4",fasta_object)
  fasta_object <- gsub("\n", "", fasta_object)
  if(parse == TRUE){
    fasta_object <- stringr::str_split(fasta_object,
                                       pattern = "",
                                       simplify = FALSE)
  }
  return(fasta_object[[1]])
}
```
If we type the name of the function without any parentheses or quotation marks, R prints out its code:
```
fasta_cleaner
```
```
## function(fasta_object, parse = TRUE){
##
## fasta_object <- sub("^(>)(.*?)(\\n)(.*)(\\n\\n)","\\4",fasta_object)
## fasta_object <- gsub("\n", "", fasta_object)
##
## if(parse == TRUE){
## fasta_object <- stringr::str_split(fasta_object,
## pattern = "",
## simplify = FALSE)
## }
##
## return(fasta_object[[1]])
## }
```
Now use the function to clean our sequences; we won’t worry about what `parse = ...` is for.
```
hShroom3 <- fasta_cleaner(hShroom3, parse = F)
mShroom3a <- fasta_cleaner(mShroom3a, parse = F)
hShroom2 <- fasta_cleaner(hShroom2, parse = F)
sShroom <- fasta_cleaner(sShroom, parse = F)
```
Again, I want to do something four times, so I’ve repeated the same line of code four times with the necessary change. This gets the job done, but there are better ways to do this using for loops.
Now let’s take a peek at what our sequences look like:
```
hShroom3
```
```
## [1] "MMRTTEDFHKPSATLNSNTATKGRYIYLEAFLEGGAPWGFTLKGGLEHGEPLIISKVEEGGKADTLSSKLQAGDEVVHINEVTLSSSRKEAVSLVKGSYKTLRLVVRRDVCTDPGHADTGASNFVSPEHLTSGPQHRKAAWSGGVKLRLKHRRSEPAGRPHSWHTTKSGEKQPDASMMQISQGMIGPPWHQSYHSSSSTSDLSNYDHAYLRRSPDQCSSQGSMESLEPSGAYPPCHLSPAKSTGSIDQLSHFHNKRDSAYSSFSTSSSILEYPHPGISGRERSGSMDNTSARGGLLEGMRQADIRYVKTVYDTRRGVSAEYEVNSSALLLQGREARASANGQGYDKWSNIPRGKGVPPPSWSQQCPSSLETATDNLPPKVGAPLPPARSDSYAAFRHRERPSSWSSLDQKRLCRPQANSLGSLKSPFIEEQLHTVLEKSPENSPPVKPKHNYTQKAQPGQPLLPTSIYPVPSLEPHFAQVPQPSVSSNGMLYPALAKESGYIAPQGACNKMATIDENGNQNGSGRPGFAFCQPLEHDLLSPVEKKPEATAKYVPSKVHFCSVPENEEDASLKRHLTPPQGNSPHSNERKSTHSNKPSSHPHSLKCPQAQAWQAGEDKRSSRLSEPWEGDFQEDHNANLWRRLEREGLGQSLSGNFGKTKSAFSSLQNIPESLRRHSSLELGRGTQEGYPGGRPTCAVNTKAEDPGRKAAPDLGSHLDRQVSYPRPEGRTGASASFNSTDPSPEEPPAPSHPHTSSLGRRGPGPGSASALQGFQYGKPHCSVLEKVSKFEQREQGSQRPSVGGSGFGHNYRPHRTVSTSSTSGNDFEETKAHIRFSESAEPLGNGEQHFKNGELKLEEASRQPCGQQLSGGASDSGRGPQRPDARLLRSQSTFQLSSEPEREPEWRDRPGSPESPLLDAPFSRAYRNSIKDAQSRVLGATSFRRRDLELGAPVASRSWRPRPSSAHVGLRSPEASASASPHTPRERHSVTPAEGDLARPVPPAARRGARRRLTPEQKKRSYSEPEKMNEVGIVEEAEPAPLGPQRNGMRFPESSVADRRRLFERDGKACSTLSLSGPELKQFQQSALADYIQRKTGKRPTSAAGCSLQEPGPLRERAQSAYLQPGPAALEGSGLASASSLSSLREPSLQPRREATLLPATVAETQQAPRDRSSSFAGGRRLGERRRGDLLSGANGGTRGTQRGDETPREPSSWGARAGKSMSAEDLLERSDVLAGPVHVRSRSSPATADKRQDVLLGQDSGFGLVKDPCYLAGPGSRSLSCSERGQEEMLPLFHHLTPRWGGSGCKAIGDSSVPSECPGTLDHQRQASRTPCPRPPLAGTQGLVTDTRAAPLTPIGTPLPSAIPSGYCSQDGQTGRQPLPPYTPAMMHRSNGHTLTQPPGPRGCEGDGPEHGVEEGTRKRVSLPQWPPPSRAKWAHAAREDSLPEESSAPDFANLKHYQKQQSLPSLCSTSDPDTPLGAPSTPGRISLRISESVLRDSPPPHEDYEDEVFVRDPHPKATSSPTFEPLPPPPPPPPSQETPVYSMDDFPPPPPHTVCEAQLDSEDPEGPRPSFNKLSKVTIARERHMPGAAHVVGSQTLASRLQTSIKGSEAESTPPSFMSVHAQLAGSLGGQPAPIQTQSLSHDPVSGTQGLEKKVSPDPQKSSEDIRTEALAKEIVHQDKSLADILDPDSRLKTTMDLMEGLFPRDVNLLKENSVKRKAIQRTVSSSGCEGKRNEDKEAVSMLVNCPAYYSVSAPKAELLNKIKEMPAEVNEEEEQADVNEKKAELIGSLTHKLETLQEAKGSLLTDIKLNNALGEEVEALISELCKPNEFDKYRMFIGDLDKVVNLLLSLSGRLARVENVLSGLGEDASNEERSSLYEKRKILAGQHEDARELKENLDRRERVVLGILANYLSEEQLQDYQHFVKMKSTLLIEQRKLDDKIKLGQEQVKCLLESLPSDFIPKAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL"
```
The header and the `\n` newline characters are gone. The sequence is now ready for use by our R alignment functions.
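As a quick sanity check, we can re-run `nchar()` on the cleaned object; the count should now be smaller than the 2070 characters we saw earlier, because the header line and the newline characters are no longer part of the string.
```
# character count after cleaning - should be smaller than before
# because the header and newline characters have been removed
nchar(hShroom3)
```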
18\.5 Aligning sequences
------------------------
We can do a [**global alignment**](https://tinyurl.com/y4du73zq) of one sequence against another using the `pairwiseAlignment()` function from the **Bioconductor** package `Biostrings` (note the capital “B” in `Biostrings`; most *R* package names are all lower case, but not this one). Global alignment algorithms identify the best way to line up two sequences so that the number of matching bases or amino acids is maximized and the number of **indels** (insertions/deletions) is minimized. (Global alignment contrasts with **local alignment**, which works with portions of sequences and is used in database search programs like **BLAST**, the Basic Local Alignment Search Tool used by many biologists.)
Let’s align human versus mouse shroom using the global alignment function pairwiseAlignment():
```
align.h3.vs.m3a <- Biostrings::pairwiseAlignment(
hShroom3,
mShroom3a)
```
We can peek at the alignment
```
align.h3.vs.m3a
```
```
## Global PairwiseAlignmentsSingleSubject (1 of 1)
## pattern: MMRTTEDFHKPSATLN-SNTATKGRYIYLEAFLE...KAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
## subject: MK-TPENLEEPSATPNPSRTPTE-RFVYLEALLE...KAGAISLPPALTGHATPGGTSVFGGVFPTLTSPL
## score: 2189.934
```
The **score** tells us how closely they are aligned; higher scores mean the sequences are more similar. In general, perfect matches increase scores the most, and indels decrease scores.
It’s hard to interpret scores on their own, so we can also get the **percent sequence identity (PID)** (aka percent identical, proportion identity, proportion identical) using the `pid()` function.
```
Biostrings::pid(align.h3.vs.m3a)
```
```
## [1] 70.56511
```
So, *shroom3* from humans (hShroom3) and *shroom3* from mice (mShroom3a) are ~71% identical (at least using this particular method of alignment, and there are many ways to do this!).
What about human shroom 3 and human shroom 2? Shroom is a **gene family**, and there are different versions of the gene within a genome.
```
align.h3.vs.h2 <- Biostrings::pairwiseAlignment(
hShroom3,
hShroom2)
```
If you take a look at the alignment you can see there are a lot of indels
```
align.h3.vs.h2
```
```
## Global PairwiseAlignmentsSingleSubject (1 of 1)
## pattern: MMRTTEDFHKPSATLNSNT--ATKGRYIYLEAFL...KAGALALPPNLTSEPIPAGGCTFSGIFPTLTSPL
## subject: MEGA-EPRARPERLAEAETRAADGGRLV--EVQL...----------------PERGK-------------
## score: -5673.853
```
Check out the score itself using `score()`, which accesses it directly without all the other information.
```
score(align.h3.vs.h2)
```
```
## [1] -5673.853
```
It’s negative because there are a LOT of indels.
Now the percent sequence identity with `pid()`:
```
Biostrings::pid(align.h3.vs.h2)
```
```
## [1] 33.83277
```
So human shroom 3 and mouse shroom 3 are ~71% identical, but human shroom 3 and human shroom 2 are only ~34% identical? How does it work out evolutionarily that a human gene and a mouse gene are more similar to each other than two human genes are? What are the evolutionary relationships among these genes within the shroom gene family?
An important note: indels contribute negatively to an alignment score, but aren’t used in the most common calculations for PID.
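If you want to explore this yourself, `Biostrings::pid()` has a `type = ...` argument (the values run from "PID1", the default, through "PID4") that changes what goes into the denominator of the calculation, for example whether or not gap positions are counted; see `?Biostrings::pid` for the exact definitions. A quick sketch:
```
# the different PID types use different denominators,
# so the values will differ somewhat
Biostrings::pid(align.h3.vs.h2, type = "PID1")
Biostrings::pid(align.h3.vs.h2, type = "PID3")
```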
18\.6 The shroom family of genes
--------------------------------
I’ve copied a table from a published paper which has accession numbers for 15 different Shroom genes. Some are different types of shroom from the same organism (e.g. shroom 1, 2, 3 and 4 from humans), and others are from different organisms (e.g. frogs, mice, bees).
```
shroom_table <- c("CAA78718" , "X. laevis Apx" , "xShroom1",
"NP_597713" , "H. sapiens APXL2" , "hShroom1",
"CAA58534" , "H. sapiens APXL", "hShroom2",
"ABD19518" , "M. musculus Apxl" , "mShroom2",
"AAF13269" , "M. musculus ShroomL" , "mShroom3a",
"AAF13270" , "M. musculus ShroomS" , "mShroom3b",
"NP_065910", "H. sapiens Shroom" , "hShroom3",
"ABD59319" , "X. laevis Shroom-like", "xShroom3",
"NP_065768", "H. sapiens KIAA1202" , "hShroom4a",
"AAK95579" , "H. sapiens SHAP-A" , "hShroom4b",
#"DQ435686" , "M. musculus KIAA1202" , "mShroom4",
"ABA81834" , "D. melanogaster Shroom", "dmShroom",
"EAA12598" , "A. gambiae Shroom", "agShroom",
"XP_392427" , "A. mellifera Shroom" , "amShroom",
"XP_783573" , "S. purpuratus Shroom" , "spShroom") #sea urchin
```
What we just made is one long character vector with all the info: three pieces of information (accession, original name, new name) for each gene.
```
is(shroom_table)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
class(shroom_table)
```
```
## [1] "character"
```
```
length(shroom_table)
```
```
## [1] 42
```
I’ll do a bit of formatting; you can ignore these details if you want
```
# convert the vector to matrix using matrix()
shroom_table_matrix <- matrix(shroom_table,
byrow = T,
nrow = 14)
# convert the matrix to a dataframe using data.frame()
shroom_table <- data.frame(shroom_table_matrix,
stringsAsFactors = F)
# name columns of dataframe using names() function
names(shroom_table) <- c("accession", "name.orig","name.new")
# Create simplified species names
## access species column using $ notation
shroom_table$spp <- "Homo"
shroom_table$spp[grep("laevis",shroom_table$name.orig)] <- "Xenopus"
shroom_table$spp[grep("musculus",shroom_table$name.orig)] <- "Mus"
shroom_table$spp[grep("melanogaster",shroom_table$name.orig)] <- "Drosophila"
shroom_table$spp[grep("gambiae",shroom_table$name.orig)] <- "mosquito"
shroom_table$spp[grep("mellifera",shroom_table$name.orig)] <- "bee"
shroom_table$spp[grep("purpuratus",shroom_table$name.orig)] <- "sea urchin"
```
Take a look at the finished table
```
shroom_table
```
```
## accession name.orig name.new spp
## 1 CAA78718 X. laevis Apx xShroom1 Xenopus
## 2 NP_597713 H. sapiens APXL2 hShroom1 Homo
## 3 CAA58534 H. sapiens APXL hShroom2 Homo
## 4 ABD19518 M. musculus Apxl mShroom2 Mus
## 5 AAF13269 M. musculus ShroomL mShroom3a Mus
## 6 AAF13270 M. musculus ShroomS mShroom3b Mus
## 7 NP_065910 H. sapiens Shroom hShroom3 Homo
## 8 ABD59319 X. laevis Shroom-like xShroom3 Xenopus
## 9 NP_065768 H. sapiens KIAA1202 hShroom4a Homo
## 10 AAK95579 H. sapiens SHAP-A hShroom4b Homo
## 11 ABA81834 D. melanogaster Shroom dmShroom Drosophila
## 12 EAA12598 A. gambiae Shroom agShroom mosquito
## 13 XP_392427 A. mellifera Shroom amShroom bee
## 14 XP_783573 S. purpuratus Shroom spShroom sea urchin
```
18\.7 Downloading multiple sequences
------------------------------------
Instead of getting one sequence at a time we can download several by accessing the “accession” column from the table
```
shroom_table$accession
```
We can give this whole set of accessions to `entrez_fetch()`, which is a **vectorized function** which knows how to handle a vector of inputs.
```
shrooms <- entrez_fetch(db = "protein",
id = shroom_table$accession,
rettype = "fasta")
```
We can look at what we got here with `cat()` (I won’t display this because it is very long!)
```
cat(shrooms)
```
The current format of these data is a single, very long character string containing all of the sequences. This is a standard way to store, share, and transmit FASTA files, but in *R* we’ll need a slightly different format.
We’ll download all of the sequences again, this time using a function from `compbio4all` called `entrez_fetch_list()` which is a **wrapper** function I wrote to put the output of `entrez_fetch()` into an *R* data format called a **list**.
This function is contained in the `compbio4all` package; however, if you are having trouble with the package you can enter it directly into your R session by running the source code of the function:
```
entrez_fetch_list <- function(db, id, rettype, ...){
  # set up list for storing output
  n.seq <- length(id)
  list.output <- as.list(rep(NA, n.seq))
  names(list.output) <- id
  # get output
  for(i in 1:length(id)){
    list.output[[i]] <- rentrez::entrez_fetch(db = db,
                                              id = id[i],
                                              rettype = rettype)
  }
  return(list.output)
}
```
However we get the function, we can use it to download a bunch of FASTA files and store them in an R list.
```
shrooms_list <- entrez_fetch_list(db = "protein",
id = shroom_table$accession,
rettype = "fasta")
```
Now we have an R **list** which has 14 **elements**, one for each sequence in our table.
```
length(shrooms_list)
```
```
## [1] 14
```
Each element of the list contains a FASTA entry for one sequence
```
shrooms_list[[1]]
```
```
## [1] ">CAA78718.1 apical protein [Xenopus laevis]\nMSAFGNTIERWNIKSTGVIAGLGHSERISPVRSMTTLVDSAYSSFSGSSYVPEYQNSFQHDGCHYNDEQL\nSYMDSEYVRAIYNPSLLDKDGVYNDIVSEHGSSKVALSGRSSSSLCSDNTTSVHRTSPAKLDNYVTNLDS\nEKNIYGDPINMKHKQNRPNHKAYGLQRNSPTGINSLQEKENQLYNPSNFMEIKDNYFGRSLDVLQADGDI\nMTQDSYTQNALYFPQNQPDQYRNTQYPGANRMSKEQFKVNDVQKSNEENTERDGPYLTKDGQFVQGQYAS\nDVRTSFKNIRRSLKKSASGKIVAHDSQGSCWIMKPGKDTPSFNSEGTITDMDYDNREQWDIRKSRLSTRA\nSQSLYYESNEDVSGPPLKAMNSKNEVDQTLSFQKDATVKSIPLLSQQLQQEKCKSHPLSDLNCEKITKAS\nTPMLYHLAGGRHSAFIAPVHNTNPAQQEKLKLESKTLERMNNISVLQLSEPRPDNHKLPKNKSLTQLADL\nHDSVEGGNSGNLNSSAEESLMNDYIEKLKVAQKKVLRETSFKRKDLQMSLPCRFKLNPPKRPTIDHFRSY\nSSSSANEESAYLQTKNSADSSYKKDDTEKVAVTRIGGRKRITKEQKKLCYSEPEKLDHLGIQKSNFAWKE\nEPTFANRREMSDSDISANRIKYLESKERTNSSSNLSKTELKQIQHNALVQYMERKTNQRPNSNPQVQMER\nTSLGLPNYNEWSIYSSETSSSDASQKYLRRRSAGASSSYDATVTWNDRFGKTSPLGRSAAEKTAGVQRKT\nFSDQRTLDGSQEHLEGSSPSLSQKTSKSTHNEQVSYVNMEFLPSSHSKNHMYNDRLTVPGDGTSAESGRM\nFVSKSRGKSMEEIGTTDIVKLAELSHSSDQLYHIKGPVISSRLENTRTTAASHQDRLLASTQIETGNLPR\nQTHQESVVGPCRSDLANLGQEAHSWPLRASDVSPGTDNPCSSSPSAEVQPGAPEPLHCLQTEDEVFTPAS\nTARNEEPNSTAFSYLLSTGKPVSQGEATALSFTFLPEQDRLEHPIVSETTPSSESDENVSDAAAEKETTT\nTQLPETSNVNKPLGFTVDNQEVEGDGEPMQPEFIDSSKQLELSSLPSSQVNIMQTAEPYLGDKNIGNEQK\nTEDLEQKSKNPEEDDLPKVKLKSPEDEILEELVKEIVAKDKSLLNCLQPVSVRESAMDLMKSLFPMDVTA\nAEKSRTRGLLGKDKGETLKKNNSDLESSSKLPSKITGMLQKRPDGESLDDITLKKMELLSKIGSKLEDLC\nEQREFLLSDISKNTTNGNNMQTMVKELCKPNEFERYMMFIGDLEKVVSLLFSLSTRLTRVENSLSKVDEN\nTDAEEMQSLKERHNLLSSQREDAKDLKANLDRREQVVTGILVKYLNEEQLQDYKHFVRLKTSLLIEQKNL\nEEKIKVYEEQFESIHNSLPP\n\n"
```
We now need to clean up each one of these sequences. We could do this by running our fasta\_cleaner() function on each of the elements of the list like this.
```
# clean sequence 1 in element 1 of list
shrooms_list[[1]] <- fasta_cleaner(shrooms_list[[1]], parse = F)
# clean sequence 2 in element 2 of list
shrooms_list[[2]] <- fasta_cleaner(shrooms_list[[2]], parse = F)
# clean sequence 3 in element 3 of list
shrooms_list[[3]] <- fasta_cleaner(shrooms_list[[3]], parse = F)
# clean sequence x in element x of list
## ...
```
Copying the same line of code 14 times and making the necessary changes takes time and is error prone. We can do this more easily using a simple `for()` loop:
```
for(i in 1:length(shrooms_list)){
shrooms_list[[i]] <- fasta_cleaner(shrooms_list[[i]], parse = F)
}
```
For loops all start with for(…) and contain the code we want to run repeatedly within curly brackets {…}. We won’t worry about the details of how for() loops work in R, but the upshot is that they allow us to easily repeat the same step many times while making the necessary minor changes.
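As an aside, base R’s `lapply()` is another common way to apply the same function to every element of a list. A one-line sketch that could be used *instead of* the loop above (not in addition to it):
```
# apply fasta_cleaner() to every element of the list in one call;
# extra arguments like parse = FALSE are passed along to the function
shrooms_list <- lapply(shrooms_list, fasta_cleaner, parse = FALSE)
```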
Now, for the second to last step of data preparation: we need to take each one of our sequences from our list and put it into a **vector**, in particular a **named vector**.
First, we need a vector to store stuff. We’ll make an empty vector that just has NAs in it using the rep() function.
```
shrooms_vector <- rep(NA, length(shrooms_list))
```
The result looks like this
```
shrooms_vector
```
```
## [1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA
```
Now we use a for() loop to take each element of shrooms\_list and put it into the vector shrooms\_vector (the precise details don’t matter; what is important is that we are using a for loop so we don’t have to repeat the same line of code 14 times)
```
# run the loop
for(i in 1:length(shrooms_vector)){
shrooms_vector[i] <- shrooms_list[[i]]
}
```
Now we name the vector using the names() function. (The exact details don’t matter.)
```
# name the vector
names(shrooms_vector) <- names(shrooms_list)
```
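For what it’s worth, because each element of the list is a single character string, base R’s `unlist()` would collapse the list into a named character vector in one step, doing the job of both the loop and the names() call above:
```
# one-step alternative: collapse the list into a named vector
# (the names are carried over from the list automatically)
shrooms_vector <- unlist(shrooms_list)
```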
Now the final step: we need to convert our named vector to a **string set** using `Biostrings::AAStringSet()`. Note the `_ss` tag at the end of the object we’re assigning the output to, which designates this as a string set.
A string set is just a type of data format someone came up with to organize and annotate sequence data. (All of these steps are annoying, and someday I’ll write some functions to simplify all of this).
```
shrooms_vector_ss <- Biostrings::AAStringSet(shrooms_vector)
```
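If you want to confirm the conversion worked, string sets can be examined with familiar-looking functions such as `length()`, `names()`, and `width()`; the last one reports the number of amino acids in each sequence.
```
# quick checks on the string set
length(shrooms_vector_ss)  # number of sequences (should be 14)
names(shrooms_vector_ss)   # accession numbers
width(shrooms_vector_ss)   # length of each protein sequence
```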
18\.8 Multiple sequence alignment
---------------------------------
We must **align** all of the sequences we downloaded and use that **alignment** to build a **phylogenetic tree**. This will tell us how the different genes, both within and between species, are likely to be related.
(Note: previously we explored the similarities between sequences using pairwiseAlignment(). This helped us understand the data, but wasn’t actually necessary for making an MSA or phylogenetic tree.)
### 18\.8\.1 Building a Multiple Sequence Alignment (MSA)
Multiple sequence alignments (MSAs) appear frequently in papers in molecular biology, biochemistry, and molecular evolution. They are also the basis for almost all phylogenies made with modern software for building phylogenetic trees from macromolecules. MSAs are extensions of the global alignment of two sequences. However, while a function like pairwiseAlignment() tries to come up with the best way to align two sequences, an MSA algorithm tries to come up with the joint alignment that is best across all of the sequences. This not only takes longer to do, but can sometimes give slightly different results than a set of individual pairwise alignments.
We’ll use the software `msa`, which implements the **ClustalW** multiple sequence alignment algorithm. Normally we’d have to download the ClustalW program and either point-and-click our way through it or use the **command line**, but the `msa` authors wrapped the algorithm in R so we can do this with a single line of R code. This will take a second or two.
NOTE: While based in R, the msa package uses an R package called Rcpp (“R C++”) to integrate R with code from the language C++. There seem to be some issues related to this process on some computers. If you can’t get msa to load or msa() to run, you can comment out the msa-related code.
```
shrooms_align <- msa(shrooms_vector_ss,
method = "ClustalW")
```
```
## use default substitution matrix
```
While msa() runs, R tells you “use default substitution matrix”, which means it’s using the program’s default way of scoring alignments; that is, how to assign values to matches, mismatches, and indels while trying to come up with the best alignment of all the sequences.
### 18\.8\.2 Viewing an MSA
Once we build an MSA we need to visualize it. There are several ways to do this, and it can be a bit tricky because genes and proteins are long and most easily viewed left to right. Often we’ll identify a subset of bases to focus on, such as a sequence motif or domain.
#### 18\.8\.2\.1 Viewing an MSA in R
msa() produces a special type of MSA object:
```
class(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment"
## attr(,"package")
## [1] "msa"
```
```
is(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment" "AAMultipleAlignment" "MsaMetaData"
## [4] "MultipleAlignment"
```
We can look at the direct output from `msa()`, but it’s not very helpful - it’s just a glimpse of part of the alignment. The “…” in the middle just means “a lot of other stuff in the middle.”
```
shrooms_align
```
```
## CLUSTAL 2.1
##
## Call:
## msa(shrooms_vector_ss, method = "ClustalW")
##
## MsaAAMultipleAlignment with 14 rows and 2252 columns
## aln names
## [1] -------------------------...------------------------- NP_065768
## [2] -------------------------...------------------------- AAK95579
## [3] -------------------------...SVFGGVFPTLTSPL----------- AAF13269
## [4] -------------------------...SVFGGVFPTLTSPL----------- AAF13270
## [5] -------------------------...CTFSGIFPTLTSPL----------- NP_065910
## [6] -------------------------...NKS--LPPPLTSSL----------- ABD59319
## [7] -------------------------...------------------------- CAA58534
## [8] -------------------------...------------------------- ABD19518
## [9] -------------------------...LT----------------------- NP_597713
## [10] -------------------------...------------------------- CAA78718
## [11] -------------------------...------------------------- EAA12598
## [12] -------------------------...------------------------- ABA81834
## [13] MTELQPSPPGYRVQDEAPGPPSCPP...------------------------- XP_392427
## [14] -------------------------...AATSSSSNGIGGPEQLNSNATSSYC XP_783573
## Con -------------------------...------------------------- Consensus
```
A function called `print_msa()` (Coghlan 2011) which I’ve put into `compbio4all` can give us more informative output by printing the actual alignment to the R console.
To use `print_msa()` we first need to make a few minor tweaks to the output of the msa() function. These are behind-the-scenes changes, so don’t worry about the details right now. We’ll change the name to `shrooms_align_seqinr` to indicate that one of our changes is putting this into a format defined by the bioinformatics package `seqinr`.
First, we change the **class** of the variable to let our functions know exactly what we’re working with.
The output of the class() function can sometimes be a bit complicated; in this case it’s telling us that the “class” of the shrooms_align object is “MsaAAMultipleAlignment”, which is a special-purpose type of R object created by the msa() function (“Msa…”) for amino acids (…“AA”…) that is a multiple sequence alignment (“…MultipleAlignment”).
```
class(shrooms_align)
```
```
## [1] "MsaAAMultipleAlignment"
## attr(,"package")
## [1] "msa"
```
To make shrooms_align play nice with our other functions it just has to be of the class “AAMultipleAlignment”. (Again, this is annoying, and took me a while to figure out when I was creating this workflow.)
You rarely have to change the class of an R object; the usual use of class() is simply to find out what an object is. However, combined with the assignment operator, class() can also be used to change the class of an object.
```
class(shrooms_align) <- "AAMultipleAlignment"
```
Now we need to use a function from msa called msaConvert() to make *another* tweak to the object to make it work with functions from the seqinr package. We’ll change the name of our msa object from shrooms\_align to shrooms\_align\_seqinr to reflect this change. (Another annoying step that took me a while to figure out when I first did this.)
```
shrooms_align_seqinr <- msaConvert(shrooms_align,
type = "seqinr::alignment")
```
I won’t display the raw output from `shrooms_align_seqinr` because it’s very long; we have 14 shroom genes, and shroom happens to be a rather long gene.
Now that I’ve done the necessary tweaks, let me display the MSA. This will only be really useful if you have a big monitor.
```
compbio4all::print_msa(alignment = shrooms_align_seqinr,
chunksize = 60)
```
#### 18\.8\.2\.2 Displaying an MSA as an R plot
Printing an MSA to the R plot window can be useful for making nicer-looking figures of parts of an MSA. I’m going to just show about 100 amino acids near the end of the alignment, where there is the most overlap across all of the sequences. This is set with the `start = ...` and `end = ...` arguments.
Note that we’re using the `shrooms_align` object again, but with the class reassigned.
```
# key step - must have class set properly
class(shrooms_align) <- "AAMultipleAlignment"
# run ggmsa
ggmsa::ggmsa(shrooms_align, # shrooms_align, NOT shrooms_align_seqinr
start = 2000,
end = 2100)
```
#### 18\.8\.2\.3 Saving an MSA as PDF
We can take a look at the alignment in PDF format if we want. In this case I’m going to just show about 100 amino acids near the end of the alignment, where there is the most overlap across all of the sequences. This is set with the `y = c(...)` argument.
In order for this to work you need to have a program called LaTeX installed on your computer. LaTeX can occasionally be tricky to install, so you can skip this step if necessary.
If you want to try to install LaTeX, you can run this code to see if it works for you:
```
install.packages("tinytex")
tinytex::install_tinytex()
```
If you have LaTeX working on your computer you can then run this code. (If this code doesn’t work, you can comment it out.)
```
msaPrettyPrint(shrooms_align, # alignment
file = "shroom_msa.pdf", # file name
y=c(2000, 2100), # range
askForOverwrite=FALSE)
```
You can see where R is saving things by running `getwd()`
```
getwd()
```
```
## [1] "/Users/nlb24/OneDrive - University of Pittsburgh/0-books/lbrb-bk/lbrb"
```
On a Mac you can usually find the file by searching in Finder for the file name, which I set to be “shroom\_msa.pdf” using the `file = ...` argument above.
18\.9 A subset of sequences
---------------------------
To make things easier we’ll move forward with just a subset of sequences:
* XP\_392427: amShroom (bee shroom)
* EAA12598: agShroom (mosquito shroom)
* ABA81834: dmShroom (*Drosophila* shroom)
* XP\_783573: spShroom (sea urchin shroom)
* CAA78718: xShroom1 (frog shroom)
Our main working object shrooms\_vector\_ss has the names of our genes listed
```
names(shrooms_vector_ss)
```
```
## [1] "CAA78718" "NP_597713" "CAA58534" "ABD19518" "AAF13269" "AAF13270"
## [7] "NP_065910" "ABD59319" "NP_065768" "AAK95579" "ABA81834" "EAA12598"
## [13] "XP_392427" "XP_783573"
```
We can select the ones we want to focus on by first making a vector of the names:
```
names.focal <- c("XP_392427","EAA12598","ABA81834","XP_783573","CAA78718")
```
We can use this vector and bracket notation to select what we want from shrooms_vector_ss:
```
shrooms_vector_ss[names.focal]
```
```
## AAStringSet object of length 5:
## width seq names
## [1] 2126 MTELQPSPPGYRVQDEAPGPPSC...GREIQDKVKLGEEQLAALREAID XP_392427
## [2] 674 IPFSSSPKNRSNSKASYLPRQPR...ADKIKLGEEQLAALKDTLVQSEC EAA12598
## [3] 1576 MKMRNHKENGNGSEMGESTKSLA...AVRIKGSEEQLSSLSDALVQSDC ABA81834
## [4] 1661 MMKDAMYPTTTSTTSSSVNPLPK...TSSSSNGIGGPEQLNSNATSSYC XP_783573
## [5] 1420 MSAFGNTIERWNIKSTGVIAGLG...KNLEEKIKVYEEQFESIHNSLPP CAA78718
```
Let’s assign the subset of sequences to a new object called shrooms\_vector\_ss\_subset.
```
shrooms_vector_ss_subset <- shrooms_vector_ss[names.focal]
```
Let’s make another MSA with just this subset. If msa isn’t working for you, you can comment this out.
```
shrooms_align_subset <- msa(shrooms_vector_ss_subset,
method = "ClustalW")
```
```
## use default substitution matrix
```
To view it using ggmsa we need to do those annoying conversions again.
```
class(shrooms_align_subset) <- "AAMultipleAlignment"
shrooms_align_subset_seqinr <- msaConvert(shrooms_align_subset, type = "seqinr::alignment")
```
Then we can plot it:
```
ggmsa::ggmsa(shrooms_align_subset, # shrooms_align, NOT shrooms_align_seqinr
start = 2030,
end = 2100)
```
We can save our new smaller MSA like this.
```
msaPrettyPrint(shrooms_align_subset, # alignment
file = "shroom_msa_subset.pdf", # file name
y=c(2030, 2100), # range
askForOverwrite=FALSE)
```
18\.10 Genetic distances of sequences in the subset
----------------------------------------------
While an MSA is a good way to examine sequences, it’s hard to assess all of the information visually. A phylogenetic tree allows you to summarize patterns in an MSA. The fastest way to make phylogenetic trees is to first summarize an MSA using a **genetic distance matrix**. The more amino acids two sequences share, the smaller the genetic distance between them and the less evolution has occurred since they diverged.
We usually work in terms of *difference* or **genetic distance** (a.k.a. **evolutionary distance**), though often we also talk in terms of similarity or identity.
Calculating genetic distance from an MSA is done using the `seqinr::dist.alignment()` function.
```
shrooms_subset_dist <- seqinr::dist.alignment(shrooms_align_subset_seqinr,
matrix = "identity")
```
This produces a “dist” class object.
```
is(shrooms_subset_dist)
```
```
## [1] "dist" "oldClass"
```
```
class(shrooms_subset_dist)
```
```
## [1] "dist"
```
If you’ve been having trouble with the MSA software, the data necessary to build the distance matrix directly in R is in this code chunk (you can ignore the details).
```
shrooms_subset_dist_alt <- matrix(data = NA,
nrow = 5,
ncol = 5)
distances <- c(0.8260049,
0.8478722, 0.9000568,
0.9244596, 0.9435187, 0.9372139,
0.9238779, 0.9370038, 0.9323225,0.9413209)
shrooms_subset_dist_alt[lower.tri(shrooms_subset_dist_alt)] <- distances
seqnames <- c("EAA12598","ABA81834","XP_392427", "XP_783573","CAA78718")
colnames(shrooms_subset_dist_alt) <- seqnames
row.names(shrooms_subset_dist_alt) <- seqnames
shrooms_subset_dist_alt <- as.dist(shrooms_subset_dist_alt)
shrooms_subset_dist <- shrooms_subset_dist_alt
```
We’ve made a distance matrix using `dist.alignment()`; let’s round it off so it’s easier to look at, using the `round()` function.
```
shrooms_subset_dist_rounded <- round(shrooms_subset_dist,
digits = 3)
```
If we want to look at it we can type
```
shrooms_subset_dist_rounded
```
```
## EAA12598 ABA81834 XP_392427 XP_783573
## ABA81834 0.826
## XP_392427 0.848 0.944
## XP_783573 0.900 0.937 0.937
## CAA78718 0.924 0.924 0.932 0.941
```
Note that we have 5 sequences, but the matrix is 4 x 4. This is because redundant information is dropped, including each sequence’s distance from itself. That is why the first column is EAA12598 but the first row is ABA81834.
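If the triangular layout is confusing, you can convert the “dist” object to a full, symmetric matrix with base R’s `as.matrix()`; every sequence then appears as both a row and a column, with zeros on the diagonal.
```
# view the distances as a full 5 x 5 symmetric matrix
as.matrix(shrooms_subset_dist_rounded)
```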
18\.11 Phylogenetic trees of subset sequences (finally!)
-------------------------------------------------------
We got our sequences, built a multiple sequence alignment, and calculated the genetic distance between sequences. Now we are \- finally \- ready to build a phylogenetic tree.
First, we let R figure out the structure of the tree. There are **MANY** ways to build phylogenetic trees. We’ll use a common one for exploring sequences, the **neighbor joining** algorithm, via the function `nj()`. Neighbor joining uses genetic distances to cluster sequences into **clades**.
nj() is a simple function that takes only a single argument: a distance matrix.
```
# Note - not using rounded values
tree_subset <- nj(shrooms_subset_dist)
```
### 18\.11\.1 Plotting phylogenetic trees
Now we’ll make a quick plot of our tree using `plot()` (and add a little label using an important function called `mtext()`).
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
type = "unrooted",
use.edge.length = F)
# add label
mtext(text = "Shroom family gene tree - UNrooted, no branch lengths")
```
This is an **unrooted tree** with no outgroup defined. For the sake of plotting we’ve also ignored the evolutionary distance between the sequences, so the branch lengths don’t have meaning.
To make a rooted tree we remove `type = "unrooted"`. In the case of neighbor joining, the algorithm tries to figure out the outgroup on its own.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
use.edge.length = F)
# add label
mtext(text = "Shroom family gene tree - rooted, no branch lenths")
```
We can include information about branch length by setting `use.edge.length = ...` to `T`.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
use.edge.length = T)
# add label
mtext(text = "Shroom family gene tree - rooted, with branch lenths")
```
Now the length of the branches indicates the evolutionary distance between sequences and corresponds to the distances reported in our distance matrix. The branches are all very long, indicating that these genes have been evolving independently for many millions of years.
An important note: the vertical lines on the tree have no meaning, only the horizontal ones.
Because the branch lengths are all so long I find this tree a bit hard to view when it’s rooted. Let’s make it unrooted again.
```
# plot tree
plot.phylo(tree_subset, main="Phylogenetic Tree",
type = "unrooted",
use.edge.length = T)
# add label
mtext(text = "Shroom family gene tree - rooted, with branch lenths")
```
Now you can see that the ABA and EAA sequences form a clade, and that the distance between them is somewhat smaller than the distance between other sequences. If we go back to our original distance matrix, we can see that the smallest genetic distance is between ABA and EAA at 0\.826\.
```
shrooms_subset_dist_rounded
```
```
## EAA12598 ABA81834 XP_392427 XP_783573
## ABA81834 0.826
## XP_392427 0.848 0.944
## XP_783573 0.900 0.937 0.937
## CAA78718 0.924 0.924 0.932 0.941
```
We can confirm that this is the minimum using the min() function.
```
min(shrooms_subset_dist_rounded)
```
```
## [1] 0.826
```
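If you want R to tell you which pair that minimum belongs to, one approach (just a sketch, not a required step) is to convert the distances to a full matrix, blank out the zero diagonal, and ask which() for the row and column of the smallest value. It should point to the ABA81834/EAA12598 pair (listed twice, because the matrix is symmetric).
```
# convert to a full matrix and ignore the zero diagonal
dist_mat <- as.matrix(shrooms_subset_dist_rounded)
diag(dist_mat) <- NA

# row and column indices of the smallest distance
which(dist_mat == min(dist_mat, na.rm = TRUE), arr.ind = TRUE)
```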
Chapter 19 Simple for() loop example
====================================
19\.1 Key functions / terms
---------------------------
* paste()
* functions to learn about vectors and other objects
* nchar()
* vector element
* bracket notation to access vector elements
* for() loop
* curly brackets { }
* function()
* class()
Here’s a simple “toy” example of using for() loops.
Let’s say we need to look at different codons and need to vary the first base of a codon. For example, we want to look at all the codons that are “XAG”, so we want
AAG
TAG
CAG
GAG
We make a vector that holds the first base of the codon:
```
#element 1 2 3 4
x <- c("A","T","C","G")
```
This is a vector with 4 elements. We can explore it using the usual functions that tell us about objects (aka variables) in R.
```
is(x)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
length(x)
```
```
## [1] 4
```
```
nchar(x)
```
```
## [1] 1 1 1 1
```
```
dim(x)
```
```
## NULL
```
```
nrow(x)
```
```
## NULL
```
```
ncol(x)
```
```
## NULL
```
If we need to know what a function does we should always look it up in the help file
```
?nchar
```
Note that when we call the function nchar() on x we get
```
nchar(x)
```
```
## [1] 1 1 1 1
```
This means that each element of x has one character in it.
Compare that result with this. Let’s make a vector with a single codon in it.
```
#element 1
y <- c("AGC")
```
Now nchar() says this
```
nchar(y)
```
```
## [1] 3
```
That is, 3 characters (3 letters) in the first and only element of the vector.
If our vector contained codons, it would look like this
```
y <- c("AAG", "TAG", "CAG", "GAG")
nchar(y)
```
```
## [1] 3 3 3 3
```
That is, four elements in the vector, each element with 3 characters in it.
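A related base R function worth knowing is substr(), which extracts characters by position. For example, we could pull out just the first base of each codon in y (this is an aside, not a step we need below):
```
# first character of each element of y
# returns "A" "T" "C" "G"
substr(y, start = 1, stop = 1)
```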
Let’s say we are keeping the second and third positions of our codon fixed at “AG” and will vary the first position. A function that will be handy is paste().
paste() takes things and combines them into a single element of a vector. So, I can do this with my name
```
n <- paste("Nathan","Linn","Brouwer")
```
This gives me a vector of length 1
```
length(n)
```
```
## [1] 1
```
That contains my name
```
n
```
```
## [1] "Nathan Linn Brouwer"
```
If I don’t want any spaces I can do this
```
paste("NAGhan","Linn","Brouwer", sep = "")
```
```
## [1] "NAGhanLinnBrouwer"
```
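Base R also has paste0(), which is simply paste() with sep = "" built in, so the same thing can be written a little more compactly:
```
# paste0() is paste() with sep = "" as the default
# returns "NathanLinnBrouwer"
paste0("Nathan", "Linn", "Brouwer")
```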
I can use paste() to assemble codons for me:
```
codon1 <- paste("A", "AG", sep = "")
codon1
```
```
## [1] "AAG"
```
I can make all four possible codons that end in “AG” like this:
```
paste("A", "AG", sep = "")
```
```
## [1] "AAG"
```
```
paste("T", "AG", sep = "")
```
```
## [1] "TAG"
```
```
paste("C", "AG", sep = "")
```
```
## [1] "CAG"
```
```
paste("G", "AG", sep = "")
```
```
## [1] "GAG"
```
Since I have a vector with the first base I’m varying in it, I can also do this using bracket notation, with x\[1], x\[2], etc.
```
paste(x[1], "AG", sep = "")
```
```
## [1] "AAG"
```
```
paste(x[2], "AG", sep = "")
```
```
## [1] "TAG"
```
```
paste(x[3], "AG", sep = "")
```
```
## [1] "CAG"
```
```
paste(x[4], "AG", sep = "")
```
```
## [1] "GAG"
```
Copying the same line of code multiple times gets the job done but is prone to errors. Anytime the same process gets repeated you should consider using for() loops and/or functions. I can take the four lines of code in the previous chunk and turn them into a for() loop like this:
```
for(i in 1:length(x)){
  codon <- paste(x[i], "AG", sep = "")
  print(codon)
}
```
```
## [1] "AAG"
## [1] "TAG"
## [1] "CAG"
## [1] "GAG"
```
All for loops start with for(…). Don’t worry about what’s in between the parentheses right now. Then there’s a curly bracket {, some code that does what we want, and a closing curly bracket }.
This is a “toy” example and doesn’t accomplish much - my for() loop has as many lines of code as the stuff it replaces. But if I have to do something dozens, hundreds, or thousands of times then it’s very useful to use for() loops.
Functions also allow you to take a process and consolidate it. Often, functions contain for loops in them. For example, I can consolidate the for() loop into a function like this.
First, I define a function called for\_loop\_function()
```
for_loop_function <- function(x){
  for(i in 1:length(x)){
    codon <- paste(x[i], "AG", sep = "")
    print(codon)
  }
}
```
All function definitions start with function(…), have a curly bracket {, some code, and end with a }. Functions don’t always contain for loops, but this one does, so there are two sets of curly brackets: one for the for() loop and one for the function wrapping around it.
So now I can get the results I did before like this
```
for_loop_function(x)
```
```
## [1] "AAG"
## [1] "TAG"
## [1] "CAG"
## [1] "GAG"
```
This is handy if I’m going to modify or reuse the process I’m doing. Let’s say I want to work with RNA instead of DNA. I’ll define a vector like this with U as the second element of the vector instead of T
```
x1 <- c("A","U" ,"C", "G")
x1[2]
```
```
## [1] "U"
```
Now I can get my results for RNA
```
for_loop_function(x1)
```
```
## [1] "AAG"
## [1] "UAG"
## [1] "CAG"
## [1] "GAG"
```
Note that because everything in R is an object, I can learn about objects containing functions like this
```
is(for_loop_function)
```
```
## [1] "function" "OptionalFunction" "PossibleMethod"
## [4] "expression_OR_function"
```
The first element of the output tells me that this is a function object.
More succinctly I can use the class() function
```
class(for_loop_function)
```
```
## [1] "function"
```
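Two other base R functions that are handy for poking at function objects are args(), which shows the arguments a function takes, and body(), which shows the code inside it:
```
# what arguments does the function take?
args(for_loop_function)

# what code does it run?
body(for_loop_function)
```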
Here’s an example of another function. Note the key elements:
* The function name: entrez_fetch_list
* The assignment operator <-
* The function() function
* The brackets
In this case, there’s also a for() loop in this function.
```
entrez_fetch_list <- function(db, id, rettype, ...){
  # set up list for storing output
  n.seq <- length(id)
  list.output <- as.list(rep(NA, n.seq))
  names(list.output) <- id
  # get output
  for(i in 1:length(id)){
    list.output[[i]] <- rentrez::entrez_fetch(db = db,
                                              id = id[i],
                                              rettype = rettype)
  }
  return(list.output)
}
```
Chapter 20 Phylogenetic tree example using neighbor joining
===========================================================
20\.1 Key vocab / concepts (not exhaustive)
-------------------------------------------
neighbor joining, msa, distance matrix, redundancy in distance matrices, diagonal of matrix, upper/lower triangular portion of matrix, symmetric matrix, phylogenetic tree, unrooted tree, branches, branch lengths, nodes
Phylogenetic trees can be built in several ways. Almost all methods using macromolecular sequences (DNA, protein, RNA) require first creating an MSA. Some methods then use algorithms that work directly with the MSA. A faster way is to summarize the MSA as a distance matrix by calculating the genetic distance between each pair of sequences in the MSA. This produces a genetic distance matrix. There are several ways to calculate genetic distance, but the simplest is to count the number of bases that are different between two sequences that are of equal length, or to use 1 \- PID (one minus the percent identity).
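As a toy illustration of that idea (not part of the original example), here’s a minimal sketch that counts the differing bases between two made\-up sequences of equal length and converts that to a 1 \- PID style distance:
```
# Two made-up sequences of equal length, one base per element
seq1 <- c("A", "T", "G", "C", "A", "T")
seq2 <- c("A", "T", "G", "G", "A", "A")

# Simple distance: the number of positions where the bases differ
sum(seq1 != seq2)

# Proportion identity (PID) and the corresponding distance, 1 - PID
pid <- mean(seq1 == seq2)
1 - pid
```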
The ape package has a popular distance\-based tree building method called neighbor joining (nj). This is a common method; for example, it can be used via the BLAST website.
First, download ape from CRAN if needed. I’ve commented this out because I have already done this.
```
# install.packages("ape")
```
Now load it into my current R session
```
library(ape)
```
I can always learn about an R function by going to its help file. Help files often include examples at the bottom. These vary in quality, but good ones use simple datasets to illustrate the basic operation of functions in a package.
If you go to the nj help file
```
?nj
```
and scroll to the bottom, you’ll see a classic example from the paper that introduced the nj algorithm, Saitou and Nei (1987\).
In this example, we’ll make a simple 8 x 8 distance matrix where each value is the number of bases that are different between two sequences.
The data first go in a vector
```
x <- c(7, 8, 11, 13, 16, 13, 17, 5, 8, 10, 13,
10, 14, 5, 7, 10, 7, 11, 8, 11, 8, 12,
5, 6, 10, 9, 13, 8)
```
Note that even though we’re going to make an 8 x 8 matrix, which will have 64 elements, we only have 28 elements in our vector. See if you can predict why this is.
```
length(x)
```
```
## [1] 28
```
We’ll then make an 8 x 8 matrix M using the matrix() function. (Don’t worry about the exact arguments being used here)
```
M <- matrix(0, 8, 8)
M
```
```
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
## [1,] 0 0 0 0 0 0 0 0
## [2,] 0 0 0 0 0 0 0 0
## [3,] 0 0 0 0 0 0 0 0
## [4,] 0 0 0 0 0 0 0 0
## [5,] 0 0 0 0 0 0 0 0
## [6,] 0 0 0 0 0 0 0 0
## [7,] 0 0 0 0 0 0 0 0
## [8,] 0 0 0 0 0 0 0 0
```
Now we’ll make the matrix (close eyes and run code)
```
M[lower.tri(M)] <- x          # fill the lower triangle with the distances
M <- t(M)                     # transpose, moving those values to the upper triangle
M[lower.tri(M)] <- x          # fill the lower triangle again, making M symmetric
dimnames(M) <- list(1:8, 1:8) # label the rows and columns 1 through 8
```
Now we have an 8 x 8 matrix filled in
```
M
```
```
## 1 2 3 4 5 6 7 8
## 1 0 7 8 11 13 16 13 17
## 2 7 0 5 8 10 13 10 14
## 3 8 5 0 5 7 10 7 11
## 4 11 8 5 0 8 11 8 12
## 5 13 10 7 8 0 5 6 10
## 6 16 13 10 11 5 0 9 13
## 7 13 10 7 8 6 9 0 8
## 8 17 14 11 12 10 13 8 0
```
You can see that the diagonal is all 0, and that the matrix is symmetric. There were only 28 elements in our vector because the same values appear in both the upper triangular portion of the matrix and in the lower triangular portion. So, 28 elements, used in two places, plus 8 zeros for the diagonal:
```
2*28 + 8
```
```
## [1] 64
```
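Equivalently, 28 is the number of unique pairs you can form from 8 sequences, which we can check with the choose() function:
```
# number of unique pairwise comparisons among 8 sequences: 8*7/2 = 28
choose(8, 2)
```
```
## [1] 28
```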
In this case the zeros are data: a sequence compared to itself has a distance of 0\. In many cases, the redundancies of a matrix will be dropped. For example, the diagonal may be left blank, one of the two triangular portions dropped (it doesn’t matter which one), and even the first row or column dropped entirely because it contains no unique information.
Now we can make the actual tree. nj() is a simple function that takes a single argument: a distance matrix
```
tr <- ape::nj(M)
```
The plot() command visualizes the tree for us. The argument type = "u" is just an abbreviation of type = "unrooted", so the two calls below are equivalent.
```
plot(tr, type = "u")
plot(tr, type = "unrooted")
```
This is an unrooted tree because no outgroup (distantly related comparison group that diverged long ago) was defined. Unrooted trees are common in practice, but usually trees in textbooks are rooted. The lengths of the branches correlate with the number of differences between each pair of sequences. Two branches meet at a node.
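If you want to see a rooted version for comparison, ape also has a root() function. Neither this example nor the original data define a true outgroup, so in the sketch below I arbitrarily pick sequence "8" as the outgroup purely for illustration:
```
# Sketch only: root the tree on an arbitrarily chosen tip ("8"),
# since no real outgroup is defined for this example data set
tr_rooted <- ape::root(tr, outgroup = "8", resolve.root = TRUE)
plot(tr_rooted)
```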
The nj help file also has another example based on data from mice that comes with the ape package.
We can learn more about the data using ?woodmouse
This tells us that this is data from the mitochondrial gene cytochrome b, a gene that occurs in all mitochondria and hence in all eukaryotes (but not archaea or bacteria).
We load these data using the data() command
```
data(woodmouse)
```
We can see what it is using is(), class(), and other functions
```
is(woodmouse)
```
```
## [1] "DNAbin"
```
```
class(woodmouse)
```
```
## [1] "DNAbin"
```
```
dim(woodmouse)
```
```
## [1] 15 965
```
Note that it’s a specialized data object called “DNAbin” and not in a simple matrix format. I can learn about DNAbin objects from their help file. This help file also has some example code at the bottom, but it’s pretty dense. I was able to fish out a handy function called dist.dna() which lets me better see what’s in the woodmouse object.
```
dist.dna(woodmouse)
```
There are 15 samples and the matrix is big. I can make it easier to see by using the round() function wrapped around dist.dna()
```
round(dist.dna(woodmouse), digits = 2)
```
```
## No305 No304 No306 No0906S No0908S No0909S No0910S No0912S No0913S
## No304 0.01
## No306 0.01 0.00
## No0906S 0.02 0.01 0.01
## No0908S 0.02 0.01 0.01 0.01
## No0909S 0.02 0.02 0.01 0.02 0.02
## No0910S 0.02 0.01 0.01 0.01 0.01 0.02
## No0912S 0.01 0.01 0.01 0.01 0.01 0.01 0.01
## No0913S 0.02 0.01 0.00 0.01 0.01 0.02 0.01 0.01
## No1103S 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.01
## No1007S 0.02 0.02 0.01 0.02 0.02 0.00 0.02 0.01 0.02
## No1114S 0.02 0.02 0.02 0.02 0.02 0.02 0.02 0.02 0.02
## No1202S 0.02 0.01 0.01 0.01 0.01 0.02 0.00 0.01 0.01
## No1206S 0.02 0.01 0.01 0.01 0.01 0.02 0.01 0.01 0.01
## No1208S 0.02 0.02 0.01 0.02 0.02 0.00 0.02 0.01 0.02
## No1103S No1007S No1114S No1202S No1206S
## No304
## No306
## No0906S
## No0908S
## No0909S
## No0910S
## No0912S
## No0913S
## No1103S
## No1007S 0.01
## No1114S 0.02 0.02
## No1202S 0.01 0.01 0.02
## No1206S 0.01 0.02 0.02 0.01
## No1208S 0.01 0.00 0.02 0.02 0.02
```
It’s still a big matrix, so you may need to move your RStudio panes around a bit to see it all.
Note that in this case only the lower triangular portion of the matrix is shown: the upper portion and the diagonal are omitted, and the first sample doesn’t get a row of its own, so the first row shown is No304 while the first column is No305 (followed by column 2, No304\).
In order to build the tree we have to turn the DNAbin object into this distance matrix. The most transparent way to do this is in two steps. First, make the distance matrix
```
mouse_M <- dist.dna(woodmouse)
```
Then build the tree with nj()
```
trw <- nj(mouse_M)
```
We can do it in a single step, though, if we wrap the nj() function around dist.dna()
```
trw <- nj(dist.dna(woodmouse))
```
We can then visualize it using plot()
```
plot(trw)
```
Chapter 21 Downloading DNA sequences as FASTA files in R
========================================================
This is a modification of [“DNA Sequence Statistics”](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/chapter1.html) from Avril Coghlan’s [*A little book of R for bioinformatics.*](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/index.html). Most of the text and code was originally written by Dr. Coghlan and distributed under the [Creative Commons 3\.0](https://creativecommons.org/licenses/by/3.0/us/) license.
**NOTE**: There is some redundancy in this current draft that needs to be eliminated.
### 21\.0\.1 Functions
* library()
* help()
* cat()
* is()
* class()
* dim()
* length()
* nchar()
* strtrim()
* is.vector()
* table()
* write()
* getwd()
* seqinr::write.fasta()
### 21\.0\.2 Software/websites
* www.ncbi.nlm.nih.gov
* Text editors (e.g. Notepad\+\+, TextWrangler)
### 21\.0\.3 R vocabulary
* list
* library
* package
* CRAN
* wrapper
* underscore \_
* Camel Case
### 21\.0\.4 File types
* FASTA
### 21\.0\.5 Bioinformatics vocabulary
* accession, accession number
* NCBI
* NCBI Sequence Database
* EMBL Sequence Database
* FASTA file
* RefSeq
21\.1 Learning objectives
-------------------------
By the end of this tutorial you will be able to
* Download sequences in FASTA format
* Understand their format and structure
* Describe how FASTA data is stored in a vector using is(), class(), length() and other functions
* Determine the GC content using GC() and obtain other summary data with count()
* Save FASTA files to your hard drive
### 21\.1\.1 Organisms and Sequence accessions
* Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\.
The NCBI **RefSeq accessions** for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are NC\_001477, NC\_001474, NC\_001475 and NC\_002640, respectively.
According to Wikipedia
> “Dengue virus (DENV) is the cause of dengue fever. It is a mosquito\-borne, single positive\-stranded RNA virus … Five serotypes of the virus have been found, all of which can cause the full spectrum of disease. Nevertheless, scientists’ understanding of dengue virus may be simplistic, as rather than distinct … groups, a continuum appears to exist.” <https://en.wikipedia.org/wiki/Dengue_virus>
### 21\.1\.2 Preliminaries
Note that the package name seqinr is all lower case, though the authors of the package like to spell it “SeqinR”.
```
library(rentrez)
library(seqinr)
library(compbio4all)
```
21\.2 DNA Sequence Statistics: Part 1
-------------------------------------
### 21\.2\.1 Using R for Bioinformatics
The chapter will guide you through the process of using R to carry out simple analyses that are common in bioinformatics and computational biology. In particular, the focus is on computational analysis of biological sequence data such as genome sequences and protein sequences. The programming approaches, however, are broadly generalizable to statistics and data science.
The tutorials assume that the reader has some basic knowledge of biology, but not necessarily of bioinformatics. The focus is to explain simple bioinformatics analysis, and to explain how to carry out these analyses using *R*.
### 21\.2\.2 R packages for bioinformatics: Bioconductor and SeqinR
Many authors have written *R* packages for performing a wide variety of analyses. These do not come with the standard *R* installation, but must be installed and loaded as “add\-ons”.
Bioinformaticians have written numerous specialized packages for *R*. In this tutorial, you will learn to use some of the functions in the [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package to carry out simple analyses of DNA sequences. (`SeqinR` can retrieve sequences from a DNA sequence database, but this has largely been replaced by the functions in the package `rentrez`)
Many well\-known bioinformatics packages for *R* are in the Bioconductor set of *R* packages (www.bioconductor.org), which contains packages with many *R* functions for analyzing biological data sets such as microarray data. The [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package, available from CRAN, contains R functions for obtaining sequences from DNA and protein sequence databases, and for analyzing DNA and protein sequences.
For instructions/review on how to install an R package on your own, see [How to install an R package](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/installr.html).
We will also use functions or data from the `rentrez` and `compbio4all` packages.
Remember that you can ask for more information about a particular *R* command by using the `help()` function or the `?` operator. For example, to ask for more information about the `library()` function, you can type:
```
help("library")
```
You can also do this
```
?library
```
### 21\.2\.3 FASTA file format
The FASTA format is a simple and widely used format for storing biological (e.g. DNA or protein) sequences. It was first used by the [FASTA program](https://en.wikipedia.org/wiki/FASTA) for sequence alignment in the 1980s and has been adopted as standard by many other programs.
FASTA files begin with a single\-line description starting with a greater\-than sign `>` character, followed on the next line by the sequence itself. Here is an example of a FASTA file. (If you’re looking at the source script for this lesson you’ll see the `cat()` command, which is just a text display function used to format the text when you run the code).
```
## >A06852 183 residues
## MPRLFSYLLGVWLLLSQLPREIPGQSTNDFIKACGRELVRLWVEICGSVSWGRTALSLEEPQLETGPPAETMPSSITKDAEILKMMLEFVPNLPQELKATLSERQPSLRELQQSASKDSNLNFEEFKKIILNRQNEAEDKSLLELKNLGLDKHSRKKRLFRMTLSEKCCQVGCIRKDIARLC
```
### 21\.2\.4 The NCBI sequence database
The US [National Center for Biotechnology Information (NCBI)](www.ncbi.nlm.nih.gov) maintains the **NCBI Sequence Database**, a huge database of all the DNA and protein sequence data that has been collected. There are also similar databases in Europe, the [European Molecular Biology Laboratory (EMBL) Sequence Database](www.ebi.ac.uk/embl), and Japan, the [DNA Data Bank of Japan (DDBJ)](www.ddbj.nig.ac.jp). These three databases exchange data every night, so at any one point in time, they contain almost identical data.
Each sequence in the NCBI Sequence Database is stored in a separate **record**, and is assigned a unique identifier that can be used to refer to that record. The identifier is known as an **accession**, and consists of a mixture of numbers and letters.
For example, Dengue virus causes Dengue fever, which is classified as a **neglected tropical disease** by the World Health Organization (WHO). Dengue fever is caused by any one of four types of Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\. The NCBI accessions for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are
* NC\_001477
* NC\_001474
* NC\_001475
* NC\_002640
Note that because the NCBI Sequence Database, the EMBL Sequence Database, and DDBJ exchange data every night, the DEN\-1 (and DEN\-2, DEN\-3, DEN\-4\) Dengue virus sequence are present in all three databases, but they have different accessions in each database, as they each use their own numbering systems for referring to their own sequence records.
### 21\.2\.5 Retrieving genome sequence data using rentrez
You can retrieve sequence data from NCBI directly from *R* using the `rentrez` package. The DEN\-1 Dengue virus genome sequence has NCBI RefSeq accession NC\_001477\. To retrieve a sequence with a particular NCBI accession, you can use the function `entrez_fetch()` from the `rentrez` package. Note that to be specific where the function comes from I write it as `package::function()`.
```
dengueseq_fasta <- rentrez::entrez_fetch(db = "nucleotide",
id = "NC_001477",
rettype = "fasta")
```
Note that the “\_” in the name is just an arbitrary way to separate two words. Another common format would be `dengueseq.fasta`. Some people like `dengueseqFasta`, called **camel case** because the capital letter makes a hump in the middle of the word. Underscores are becoming most common and are favored by developers associated with RStudio and the **tidyverse** of packages that many data scientists use. I switch between “.” and “\_” as separators, usually favoring “\_” for function names and “.” for objects; I personally find camel case harder to read and to type.
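Purely as an illustration of these styles (the object names below are made up and not used elsewhere):
```
# The same kind of object named three different ways (illustrative only)
my_sequence <- "example"   # underscores ("snake case")
my.sequence <- "example"   # periods
mySequence  <- "example"   # camel case
```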
Ok, so what exactly have we done when we made `dengueseq_fasta`? We have an R object `dengueseq_fasta` which has the sequence linked to the accession number “NC\_001477\.” So where is the sequence, and what is it?
First, what is it?
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
How big is it? Try the `dim()` and `length()` commands and see which one works. Do you know why one works and the other doesn’t?
```
dim(dengueseq_fasta)
```
```
## NULL
```
```
length(dengueseq_fasta)
```
```
## [1] 1
```
The size of the object is 1\. Why is this? This is the genomic sequence of a virus, so you’d expect it to be fairly large. We’ll use another function below to explore that issue. Think about this first: how many pieces of unique information are in the `dengueseq_fasta` object? In what sense is there only *one* piece of information?
If we want to actually see the sequence we can just type `dengueseq_fasta` and press enter. This will print the WHOLE genomic sequence out, but it will probably run off your screen.
```
dengueseq_fasta
```
This is a whole genome sequence, but it’s stored as a single entry in a vector, so the `length()` command just tells us how many entries there are in the vector, which is just one! What this means is that the entire genomic sequence is stored in a single entry of the vector `dengueseq_fasta`. (If you’re not following along with this, no worries \- it’s not essential to actually working with the data)
If we want to actually know how long the sequence is, we need to use the function `nchar()`, which stands for “number of characters”.
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
The sequence is 10935 bases long. All of these bases are stored as a single **character string** with no spaces in a single entry of our `dengueseq_fasta` vector. This isn’t actually a useful format for us, so below we’re going to convert it to something more useful.
If we want to see just part of the sequence we can use the `strtrim()` function. This stands for “String trim”. Before you run the code below, predict what the 100 means.
```
strtrim(dengueseq_fasta, 100)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAAC"
```
Note that at the end of the name is a backslash followed by an n (`\n`), which indicates to the computer that this is a **newline**; this is interpreted by text editors, but is ignored by R in this context.
```
strtrim(dengueseq_fasta, 45)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nA"
```
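If you want to see how the newline is actually interpreted, you can pass the trimmed string to cat(), which prints the `\n` as a real line break instead of showing it literally:
```
# cat() renders "\n" as an actual line break
cat(strtrim(dengueseq_fasta, 100))
```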
After the `\\n` begins the sequence, which will continue on for a LOOOOOONG way. Let’s just print a little bit.
```
strtrim(dengueseq_fasta, 52)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTA"
```
Let’s print some more. Do you notice anything beside A, T, C and G in the sequence?
```
strtrim(dengueseq_fasta, 200)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTAACAGT\nTTTTTATTAGAGAGCAGATCTCTGATGAACAACCAACGGAAAAAGACGGGTCGACCGTCTTTCAATATGC\nTGAAACGCGCGAGAAA"
```
Again, there are the `\\n` newline characters, which tell text editors and word processors how to display the file. (note that if you are reading the raw code for this chapter there will be 2 slashes in front of the n in the previous sentence; this is an RMarkdown thing)
Now that we have a sense of what we’re looking at, let’s explore `dengueseq_fasta` a bit more.
We can find out more information about what it is using the `class()`command.
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
As noted before, this is character data.
Many things in R are vectors so we can ask *R* `is.vector()`
```
is.vector(dengueseq_fasta)
```
```
## [1] TRUE
```
Yup, that’s true.
Ok, let’s see what else. A handy though often verbose command is `is()`, which tells us what an object, well, what it is:
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
There is a lot here but if you scan for some key words you will see “character” and “vector” at the top. The other stuff you can ignore. The first two things, though, tell us the dengueseq\_fasta is a **vector** of the class **character**: that is, a **character vector**.
Another handy function is `str()`, which gives us a peek at the content and structure of an *R* object. This is most useful when you are working in the R console or with dataframes, but is a useful function to run on all *R* objects. How does this output differ from other ways we’ve displayed dengueseq\_fasta?
```
str(dengueseq_fasta)
```
```
## chr ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTA"| __truncated__
```
We know it contains character data \- how many characters? `nchar()` for “number of characters” answers that:
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
21\.3 Saving FASTA files
------------------------
We can save our data as a .fasta file for safekeeping. The `write()` function will save the data we downloaded as a plain text file.
If you do this, you’ll need to figure out where *R* is saving things, which requires an understanding of *R’s* **file system**; this can take some getting used to, especially if you’re new to programming. As a start, you can see where *R* saves things by using the `getwd()` command, which tells you which folder on your hard drive R is currently using as its home base for files.
```
getwd()
```
```
## [1] "/Users/nlb24/OneDrive - University of Pittsburgh/0-books/lbrb-bk/lbrb"
```
You can set the working directory to where a script file is from using these steps in RStudio:
1. Click on “Session” (in the middle of the menu on the top of the screen)
2. Select “Set Working Directory” (in the middle of the drop\-down menu)
3. Select “To source file location” (2nd option)
Then, when you save things, they will be saved in that directory.
```
write(dengueseq_fasta,
file="dengueseq.fasta")
```
You can see what files are in your directory using list.files()
```
list.files()
```
21\.4 Next steps
----------------
FASTA files in R typically have to be converted before being used. I made a function called compbio4all::fasta\_cleaner() which takes care of this.
In the optional lesson “Cleaning and preparing FASTA files for analysis in R” I process the `dengueseq_fasta` object step by step so that we can use it in analyses. If you are interested in how that functions works check out that chapter; otherwise, you can skip it.
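As a rough sketch of the kind of cleanup such a function has to do (the real fasta\_cleaner() may differ in its details), you could strip the header line and the newlines with base R and then split the sequence into one base per element:
```
# Sketch only: hand-rolled cleanup of the FASTA text
seq_clean  <- sub("^>[^\n]*\n", "", dengueseq_fasta)  # drop the ">..." header line
seq_clean  <- gsub("\n", "", seq_clean)               # drop the newline characters
seq_vector <- strsplit(seq_clean, "")[[1]]            # one base per vector element
head(seq_vector)
```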
21\.5 Exercises
---------------
Uncomment the code chunk below to download your own sequence of interest. Change the `id =` to that of your sequence, and change the `db =` to “protein” if needed. Change the object name to a descriptive name, such as the name of the gene, e.g. `shroom.fasta`.
```
# dengueseq.fasta <- rentrez::entrez_fetch(db = "nucleotide", # set to "protein" if needed
# id = "NC_001477", # change accession
# rettype = "fasta")
```
Set your working directory, then save the FASTA file to your hard drive using write(). Be sure to change the name of the object and the file name to be appropriate to your gene.
```
# write(dengueseq.fasta, # change object name, e.g. shroom.fasta
# file="dengueseq.fasta") # change file name, e.g. shroom.fasta
```
21\.6 Review questions
----------------------
1. What does the nchar() function stand for?
2. Why does a FASTA file stored in a vector by entrez\_fetch() have a length of 1 and no dimension?
3. What does strtrim() mean?
4. If a sequence is stored in object x and you run the code strtrim(x, 10\), how many characters are shown?
5. What is the newline character in a FASTA file?
### 21\.0\.1 Functions
* library()
* help()
* cat()
* is()
* class()
* dim()
* length()
* nchar()
* strtrim()
* is.vector()
* table()
* write()
* getwd()
* seqinr::write.fasta()
### 21\.0\.2 Software/websites
* www.ncbi.nlm.nih.gov
* Text editors (e.g. Notepad\+\+, TextWrangler)
### 21\.0\.3 R vocabulary
* list
* library
* package
* CRAN
* wrapper
* underscore \_
* Camel Case
### 21\.0\.4 File types
* FASTA
### 21\.0\.5 Bioinformatics vocabulary
* accession, accession number
* NCBI
* NCBI Sequence Database
* EMBL Sequence Database
* FASTA file
* RefSeq
21\.1 Learning objectives
-------------------------
By the end of this tutorial you will be able to
* Download sequences in FASTA format
* Understand there format and structure
* Do basic of FASTA data is stored in a vector using is(), class(), length() and other functions
* Determine the GC content using GC() and obtain other summary data with count()
* Save FASTA files to your hard drive
### 21\.1\.1 Organisms and Sequence accessions
* Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\.
The NCBI **RefeSeq accessions** for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are NC\_001477, NC\_001474, NC\_001475 and NC\_002640, respectively.
According to Wikipedia
> “Dengue virus (DENV) is the cause of dengue fever. It is a mosquito\-borne, single positive\-stranded RNA virus … Five serotypes of the virus have been found, all of which can cause the full spectrum of disease. Nevertheless, scientists’ understanding of dengue virus may be simplistic, as rather than distinct … groups, a continuum appears to exist.” <https://en.wikipedia.org/wiki/Dengue_virus>
### 21\.1\.2 Preliminaries
Note that seqinr the package name is all lower case, though the authors of the package like to spell it “SeqinR”.
```
library(rentrez)
library(seqinr)
library(compbio4all)
```
### 21\.1\.1 Organisms and Sequence accessions
* Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\.
The NCBI **RefeSeq accessions** for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are NC\_001477, NC\_001474, NC\_001475 and NC\_002640, respectively.
According to Wikipedia
> “Dengue virus (DENV) is the cause of dengue fever. It is a mosquito\-borne, single positive\-stranded RNA virus … Five serotypes of the virus have been found, all of which can cause the full spectrum of disease. Nevertheless, scientists’ understanding of dengue virus may be simplistic, as rather than distinct … groups, a continuum appears to exist.” <https://en.wikipedia.org/wiki/Dengue_virus>
### 21\.1\.2 Preliminaries
Note that seqinr the package name is all lower case, though the authors of the package like to spell it “SeqinR”.
```
library(rentrez)
library(seqinr)
library(compbio4all)
```
21\.2 DNA Sequence Statistics: Part 1
-------------------------------------
### 21\.2\.1 Using R for Bioinformatics
The chapter will guide you through the process of using R to carry out simple analyses that are common in bioinformatics and computational biology. In particular, the focus is on computational analysis of biological sequence data such as genome sequences and protein sequences. The programming approaches, however, are broadly generalizable to statistics and data science.
The tutorials assume that the reader has some basic knowledge of biology, but not necessarily of bioinformatics. The focus is to explain simple bioinformatics analysis, and to explain how to carry out these analyses using *R*.
### 21\.2\.2 R packages for bioinformatics: Bioconductor and SeqinR
Many authors have written *R* packages for performing a wide variety of analyses. These do not come with the standard *R* installation, but must be installed and loaded as “add\-ons”.
Bioinformaticians have written numerous specialized packages for *R*. In this tutorial, you will learn to use some of the function in the [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package to to carry out simple analyses of DNA sequences. (`SeqinR` can retrieve sequences from a DNA sequence database, but this has largely been replaced by the functions in the package `rentrez`)
Many well\-known bioinformatics packages for *R* are in the Bioconductor set of *R* packages (www.bioconductor.org), which contains packages with many *R* functions for analyzing biological data sets such as microarray data. The [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package is from CRAN, which contains R functions for obtaining sequences from DNA and protein sequence databases, and for analyzing DNA and protein sequences.
For instructions/review on how to install an R package on your own see [How to install an R package](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/installr.html) )
We will also use functions or data from the `rentrez` and `compbio4all` packages.
Remember that you can ask for more information about a particular *R* command by using the `help()` function or `?` function. For example, to ask for more information about the `library()`, you can type:
```
help("library")
```
You can also do this
```
?library
```
### 21\.2\.3 FASTA file format
The FASTA format is a simple and widely used format for storing biological (e.g. DNA or protein) sequences. It was first used by the [FASTA program](https://en.wikipedia.org/wiki/FASTA) for sequence alignment in the 1980s and has been adopted as standard by many other programs.
FASTA files begin with a single\-line description starting with a greater\-than sign `>` character, followed on the next line by the sequences. Here is an example of a FASTA file. (If you’re looking at the source script for this lesson you’ll see the `cat()` command, which is just a text display function used format the text when you run the code).
```
## >A06852 183 residues MPRLFSYLLGVWLLLSQLPREIPGQSTNDFIKACGRELVRLWVEICGSVSWGRTALSLEEPQLETGPPAETMPSSITKDAEILKMMLEFVPNLPQELKATLSERQPSLRELQQSASKDSNLNFEEFKKIILNRQNEAEDKSLLELKNLGLDKHSRKKRLFRMTLSEKCCQVGCIRKDIARLC
```
### 21\.2\.4 The NCBI sequence database
The US [National Centre for Biotechnology Information (NCBI)](www.ncbi.nlm.nih.gov) maintains the **NCBI Sequence Database**, a huge database of all the DNA and protein sequence data that has been collected. There are also similar databases in Europe, the [European Molecular Biology Laboratory (EMBL) Sequence Database](www.ebi.ac.uk/embl), and Japan, the [DNA Data Bank of Japan (DDBJ)](www.ddbj.nig.ac.jp). These three databases exchange data every night, so at any one point in time, they contain almost identical data.
Each sequence in the NCBI Sequence Database is stored in a separate **record**, and is assigned a unique identifier that can be used to refer to that record. The identifier is known as an **accession**, and consists of a mixture of numbers and letters.
For example, Dengue virus causes Dengue fever, which is classified as a **neglected tropical disease** by the World Health Organization (WHO), is classified by any one of four types of Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\. The NCBI accessions for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are
* NC\_001477
* NC\_001474
* NC\_001475
* NC\_002640
Note that because the NCBI Sequence Database, the EMBL Sequence Database, and DDBJ exchange data every night, the DEN\-1 (and DEN\-2, DEN\-3, DEN\-4\) Dengue virus sequence are present in all three databases, but they have different accessions in each database, as they each use their own numbering systems for referring to their own sequence records.
### 21\.2\.5 Retrieving genome sequence data using rentrez
You can retrieve sequence data from NCBI directly from *R* using the `rentrez` package. The DEN\-1 Dengue virus genome sequence has NCBI RefSeq accession NC\_001477\. To retrieve a sequence with a particular NCBI accession, you can use the function `entrez_fetch()` from the `rentrez` package. Note that to be specific where the function comes from I write it as `package::function()`.
```
dengueseq_fasta <- rentrez::entrez_fetch(db = "nucleotide",
id = "NC_001477",
rettype = "fasta")
```
Note that the “*” in the name is just an arbitrary way to separate two words. Another common format would be `dengueseq.fasta`. Some people like `dengueseqFasta`, called **camel case** because the capital letter makes a hump in the middle of the word. Underscores are becoming most common and are favored by developers associated with RStudio and the **tidyverse** of packages that many data scientists use. I switch between ”.” and ”*” as separators, usually favoring “\_” for function names and “.” for objects; I personally find camel case harder to read and to type.
Ok, so what exactly have we done when we made `dengueseq_fasta`? We have an R object `dengueseq_fasta` which has the sequence linked to the accession number “NC\_001477\.” So where is the sequence, and what is it?
First, what is it?
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
How big is it? Try the `dim()` and `length()` commands and see which one works. Do you know why one works and the other doesn’t?
```
dim(dengueseq_fasta)
```
```
## NULL
```
```
length(dengueseq_fasta)
```
```
## [1] 1
```
The size of the object is 1\. Why is this? This is the genomic sequence of a virus, so you’d expect it to be fairly large. We’ll use another function below to explore that issue. Think about this first: how many pieces of unique information are in the `dengueseq` object? In what sense is there only *one* piece of information?
If we want to actually see the sequence we can type just type `dengueseq_fasta` and press enter. This will print the WHOLE genomic sequence out but it will probably run of your screen.
```
dengueseq_fasta
```
This is a whole genome sequence, but its stored as single entry in a vector, so the `length()` command just tells us how many entries there are in the vector, which is just one! What this means is that the entire genomic sequence is stored in a single entry of the vector `dengueseq_fasta`. (If you’re not following along with this, no worries \- its not essential to actually working with the data)
If we want to actually know how long the sequence is, we need to use the function `nchar()`, which stands for “number of characters”.
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
The sequence is 10935 bases long. All of these bases are stored as a single **character string** with no spaces in a single entry of our `dengueseq_fasta` vector. This isn’t actually a useful format for us, so below were’ going to convert it to something more useful.
If we want to see just part of the sequence we can use the `strtrim()` function. This stands for “String trim”. Before you run the code below, predict what the 100 means.
```
strtrim(dengueseq_fasta, 100)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAAC"
```
Note that at the end of the name is a slash followed by an n, which indicates to the computer that this is a **newline**; this is read by text editor, but is ignored by R in this context.
```
strtrim(dengueseq_fasta, 45)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nA"
```
After the `\\n` begins the sequence, which will continue on for a LOOOOOONG way. Let’s just print a little bit.
```
strtrim(dengueseq_fasta, 52)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTA"
```
Let’s print some more. Do you notice anything beside A, T, C and G in the sequence?
```
strtrim(dengueseq_fasta, 200)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTAACAGT\nTTTTTATTAGAGAGCAGATCTCTGATGAACAACCAACGGAAAAAGACGGGTCGACCGTCTTTCAATATGC\nTGAAACGCGCGAGAAA"
```
Again, there are the `\\n` newline characters, which tell text editors and word processors how to display the file. (note that if you are reading the raw code for this chapter there will be 2 slashes in front of the n in the previous sentence; this is an RMarkdown thing)
Now that we a sense of what we’re looking at let’s explore the `dengueseq_fasta` a bit more.
We can find out more information about what it is using the `class()`command.
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
As noted before, this is character data.
Many things in R are vectors so we can ask *R* `is.vector()`
```
is.vector(dengueseq_fasta)
```
```
## [1] TRUE
```
Yup, that’s true.
Ok, let’s see what else. A handy though often verbose command is `is()`, which tells us what an object, well, what it is:
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
There is a lot here but if you scan for some key words you will see “character” and “vector” at the top. The other stuff you can ignore. The first two things, though, tell us the dengueseq\_fasta is a **vector** of the class **character**: that is, a **character vector**.
Another handy function is `str()`, which gives us a peak at the context and structure of an *R* object. This is most useful when you are working in the R console or with dataframes, but is a useful function to run on all *R* objects. How does this output differ from other ways we’ve displayed dengueseq\_fasta?
```
str(dengueseq_fasta)
```
```
## chr ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTA"| __truncated__
```
We know it contains character data \- how many characters? `nchar()` for “number of characters” answers that:
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
### 21\.2\.1 Using R for Bioinformatics
The chapter will guide you through the process of using R to carry out simple analyses that are common in bioinformatics and computational biology. In particular, the focus is on computational analysis of biological sequence data such as genome sequences and protein sequences. The programming approaches, however, are broadly generalizable to statistics and data science.
The tutorials assume that the reader has some basic knowledge of biology, but not necessarily of bioinformatics. The focus is to explain simple bioinformatics analysis, and to explain how to carry out these analyses using *R*.
### 21\.2\.2 R packages for bioinformatics: Bioconductor and SeqinR
Many authors have written *R* packages for performing a wide variety of analyses. These do not come with the standard *R* installation, but must be installed and loaded as “add\-ons”.
Bioinformaticians have written numerous specialized packages for *R*. In this tutorial, you will learn to use some of the function in the [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package to to carry out simple analyses of DNA sequences. (`SeqinR` can retrieve sequences from a DNA sequence database, but this has largely been replaced by the functions in the package `rentrez`)
Many well\-known bioinformatics packages for *R* are in the Bioconductor set of *R* packages (www.bioconductor.org), which contains packages with many *R* functions for analyzing biological data sets such as microarray data. The [`SeqinR`](https://cran.r-project.org/web/packages/seqinr/index.html) package is from CRAN, which contains R functions for obtaining sequences from DNA and protein sequence databases, and for analyzing DNA and protein sequences.
For instructions/review on how to install an R package on your own see [How to install an R package](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/installr.html) )
We will also use functions or data from the `rentrez` and `compbio4all` packages.
Remember that you can ask for more information about a particular *R* command by using the `help()` function or `?` function. For example, to ask for more information about the `library()`, you can type:
```
help("library")
```
You can also do this
```
?library
```
### 21\.2\.3 FASTA file format
The FASTA format is a simple and widely used format for storing biological (e.g. DNA or protein) sequences. It was first used by the [FASTA program](https://en.wikipedia.org/wiki/FASTA) for sequence alignment in the 1980s and has been adopted as standard by many other programs.
FASTA files begin with a single\-line description starting with a greater\-than sign `>` character, followed on the next line by the sequences. Here is an example of a FASTA file. (If you’re looking at the source script for this lesson you’ll see the `cat()` command, which is just a text display function used format the text when you run the code).
```
## >A06852 183 residues MPRLFSYLLGVWLLLSQLPREIPGQSTNDFIKACGRELVRLWVEICGSVSWGRTALSLEEPQLETGPPAETMPSSITKDAEILKMMLEFVPNLPQELKATLSERQPSLRELQQSASKDSNLNFEEFKKIILNRQNEAEDKSLLELKNLGLDKHSRKKRLFRMTLSEKCCQVGCIRKDIARLC
```
### 21\.2\.4 The NCBI sequence database
The US [National Centre for Biotechnology Information (NCBI)](www.ncbi.nlm.nih.gov) maintains the **NCBI Sequence Database**, a huge database of all the DNA and protein sequence data that has been collected. There are also similar databases in Europe, the [European Molecular Biology Laboratory (EMBL) Sequence Database](www.ebi.ac.uk/embl), and Japan, the [DNA Data Bank of Japan (DDBJ)](www.ddbj.nig.ac.jp). These three databases exchange data every night, so at any one point in time, they contain almost identical data.
Each sequence in the NCBI Sequence Database is stored in a separate **record**, and is assigned a unique identifier that can be used to refer to that record. The identifier is known as an **accession**, and consists of a mixture of numbers and letters.
For example, Dengue virus causes Dengue fever, which is classified as a **neglected tropical disease** by the World Health Organization (WHO), is classified by any one of four types of Dengue virus: DEN\-1, DEN\-2, DEN\-3, and DEN\-4\. The NCBI accessions for the DNA sequences of the DEN\-1, DEN\-2, DEN\-3, and DEN\-4 Dengue viruses are
* NC\_001477
* NC\_001474
* NC\_001475
* NC\_002640
Note that because the NCBI Sequence Database, the EMBL Sequence Database, and DDBJ exchange data every night, the DEN\-1 (and DEN\-2, DEN\-3, DEN\-4\) Dengue virus sequence are present in all three databases, but they have different accessions in each database, as they each use their own numbering systems for referring to their own sequence records.
### 21\.2\.5 Retrieving genome sequence data using rentrez
You can retrieve sequence data from NCBI directly from *R* using the `rentrez` package. The DEN\-1 Dengue virus genome sequence has NCBI RefSeq accession NC\_001477\. To retrieve a sequence with a particular NCBI accession, you can use the function `entrez_fetch()` from the `rentrez` package. Note that to be specific where the function comes from I write it as `package::function()`.
```
dengueseq_fasta <- rentrez::entrez_fetch(db = "nucleotide",
id = "NC_001477",
rettype = "fasta")
```
Note that the “*” in the name is just an arbitrary way to separate two words. Another common format would be `dengueseq.fasta`. Some people like `dengueseqFasta`, called **camel case** because the capital letter makes a hump in the middle of the word. Underscores are becoming most common and are favored by developers associated with RStudio and the **tidyverse** of packages that many data scientists use. I switch between ”.” and ”*” as separators, usually favoring “\_” for function names and “.” for objects; I personally find camel case harder to read and to type.
Ok, so what exactly have we done when we made `dengueseq_fasta`? We have an R object `dengueseq_fasta` which has the sequence linked to the accession number “NC\_001477\.” So where is the sequence, and what is it?
First, what is it?
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
How big is it? Try the `dim()` and `length()` commands and see which one works. Do you know why one works and the other doesn’t?
```
dim(dengueseq_fasta)
```
```
## NULL
```
```
length(dengueseq_fasta)
```
```
## [1] 1
```
The size of the object is 1\. Why is this? This is the genomic sequence of a virus, so you’d expect it to be fairly large. We’ll use another function below to explore that issue. Think about this first: how many pieces of unique information are in the `dengueseq` object? In what sense is there only *one* piece of information?
If we want to actually see the sequence we can type just type `dengueseq_fasta` and press enter. This will print the WHOLE genomic sequence out but it will probably run of your screen.
```
dengueseq_fasta
```
This is a whole genome sequence, but its stored as single entry in a vector, so the `length()` command just tells us how many entries there are in the vector, which is just one! What this means is that the entire genomic sequence is stored in a single entry of the vector `dengueseq_fasta`. (If you’re not following along with this, no worries \- its not essential to actually working with the data)
If we want to actually know how long the sequence is, we need to use the function `nchar()`, which stands for “number of characters”.
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
The sequence is 10935 bases long. All of these bases are stored as a single **character string** with no spaces in a single entry of our `dengueseq_fasta` vector. This isn’t actually a useful format for us, so below were’ going to convert it to something more useful.
If we want to see just part of the sequence we can use the `strtrim()` function. This stands for “String trim”. Before you run the code below, predict what the 100 means.
```
strtrim(dengueseq_fasta, 100)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAAC"
```
Note that at the end of the name is a slash followed by an n, which indicates to the computer that this is a **newline**; this is read by text editor, but is ignored by R in this context.
```
strtrim(dengueseq_fasta, 45)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nA"
```
After the `\\n` begins the sequence, which will continue on for a LOOOOOONG way. Let’s just print a little bit.
```
strtrim(dengueseq_fasta, 52)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTA"
```
Let’s print some more. Do you notice anything beside A, T, C and G in the sequence?
```
strtrim(dengueseq_fasta, 200)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTAACAGT\nTTTTTATTAGAGAGCAGATCTCTGATGAACAACCAACGGAAAAAGACGGGTCGACCGTCTTTCAATATGC\nTGAAACGCGCGAGAAA"
```
Again, there are the `\\n` newline characters, which tell text editors and word processors how to display the file. (note that if you are reading the raw code for this chapter there will be 2 slashes in front of the n in the previous sentence; this is an RMarkdown thing)
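As an aside (not part of the original lesson), you can see how those newline characters behave by comparing `print()` and `cat()` on a small made-up string:
```
# print() shows the \n literally; cat() interprets it and starts a new line
toy <- "first line\nsecond line"
print(toy)
cat(toy)
```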
Now that we have a sense of what we’re looking at, let’s explore `dengueseq_fasta` a bit more.
We can find out more information about what it is using the `class()` command.
```
class(dengueseq_fasta)
```
```
## [1] "character"
```
As noted before, this is character data.
Many things in R are vectors, so we can ask *R* about this object with `is.vector()`:
```
is.vector(dengueseq_fasta)
```
```
## [1] TRUE
```
Yup, that’s true.
Ok, let’s see what else we can learn. A handy though often verbose command is `is()`, which tells us what an object, well, *is*:
```
is(dengueseq_fasta)
```
```
## [1] "character" "vector"
## [3] "data.frameRowLabels" "SuperClassMethod"
## [5] "EnumerationValue" "character_OR_connection"
## [7] "character_OR_NULL" "atomic"
## [9] "vector_OR_Vector" "vector_OR_factor"
```
There is a lot here but if you scan for some key words you will see “character” and “vector” at the top. The other stuff you can ignore. The first two things, though, tell us that dengueseq\_fasta is a **vector** of class **character**: that is, a **character vector**.
Another handy function is `str()`, which gives us a peek at the content and structure of an *R* object. This is most useful when you are working in the R console or with dataframes, but it is a useful function to run on all *R* objects. How does this output differ from other ways we’ve displayed dengueseq\_fasta?
```
str(dengueseq_fasta)
```
```
## chr ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTA"| __truncated__
```
We know it contains character data \- how many characters? `nchar()` for “number of characters” answers that:
```
nchar(dengueseq_fasta)
```
```
## [1] 10935
```
21\.3 Saving FASTA files
------------------------
We can save our data as a .fasta file for safe keeping. The `write()` function will save the data we downloaded as a plain text file.
If you do this, you’ll need to figure out where *R* is saving things, which requires an understanding of *R’s* **file system**, which can take some getting used to, especially if you’re new to programming. As a start, you can see where *R* saves things by using the `getwd()` command, which tells you which folder on your hard drive R is currently using as its home base for files.
```
getwd()
```
```
## [1] "/Users/nlb24/OneDrive - University of Pittsburgh/0-books/lbrb-bk/lbrb"
```
You can set the working directory to the folder where a script file is saved using these steps in RStudio:
1. Click on “Session” (in the middle of the menu on the top of the screen)
2. Select “Set Working Directory” (in the middle of the drop\-down menu)
3. Select “To source file location” (2nd option)
Then, when you save things, they will be in that directory.
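If you prefer code to menus, base R’s `setwd()` does the same job; the path below is only a made-up example, so replace it with a real folder on your own computer:
```
# set the working directory by hand (example path only - replace with a real folder)
# setwd("C:/Users/yourname/Documents/bioinformatics")

# then confirm where you are
getwd()
```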
```
write(dengueseq_fasta,
file="dengueseq.fasta")
```
You can see what files are in your directory using `list.files()`
```
list.files()
```
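As an optional sanity check (assuming the `write()` call above succeeded and you are still in the same working directory), you can read the file back in with base R’s `readLines()`:
```
# read the saved file back in; each line of the file becomes one element
dengue_reread <- readLines("dengueseq.fasta")
dengue_reread[1]  # the header line
```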
21\.4 Next steps
----------------
FASTA files in R typically have to be converted before being used. I made a function called `compbio4all::fasta_cleaner()` which takes care of this.
In the optional lesson “Cleaning and preparing FASTA files for analysis in R” I process the `dengueseq_fasta` object step by step so that we can use it in analyses. If you are interested in how that function works, check out that chapter; otherwise, you can skip it.
21\.6 Review questions
----------------------
1. What does the nchar() function stand for?
2. Why does a FASTA file stored in a vector by entrez\_fetch() have a length of 1 and no dimension?
3. What does strtrim() mean?
4. If a sequence is stored in object x and you run the code strtrim(x, 10\), how many characters are shown?
5. What is the newline character in a FASTA file?
Chapter 22 Cleaning and preparing FASTA files for analysis in R
===============================================================
This is a modification of [“DNA Sequence Statistics”](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/chapter1.html) from Avril Coghlan’s [*A little book of R for bioinformatics.*](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/index.html). Most of the text and code was originally written by Dr. Coghlan and distributed under the [Creative Commons 3\.0](https://creativecommons.org/licenses/by/3.0/us/) license.
22\.1 Preliminaries
-------------------
We’ll need the `dengueseq_fasta` FASTA data object, which is in the `compbio4all` package. We’ll also use the `stringr` package for cleaning up the FASTA data, which can be downloaded with `install.packages("stringr")`
```
# compbio4all, which has dengueseq_fasta
library(compbio4all)
data(dengueseq_fasta)
# stringr, for data cleaning
library(stringr)
```
22\.2 Convert FASTA sequence to an R variable
---------------------------------------------
We can’t actually do much with the contents of the `dengueseq_fasta` object we downloaded with the `rentrez` package except read them. If we want to address some biological questions with the data, we need to convert it into a data structure *R* can work with.
There are several things we need to do:
1. The **meta data** line `>NC_001477.1 Dengue virus 1, complete genome` (metadata is “data” about data, such as where it came from, what it is, who made it, etc.).
2. All the `\n` that show up in the file (these are the **line breaks**).
3. Put each nucleotide of the sequence into its own spot in a vector.
There are functions in other packages that can do this automatically, but
1. I haven’t found one I like, and
2. Walking through this will help you understand the types of operations you can do on text data.
I have my own FASTA cleaning function in `compbio4all`, `fasta_cleaner()`, and the code below breaks down some of the concepts and functions it uses.
The first two steps for cleaning up a FASTA file involve removing things from the existing **character string** that contains the sequence. The third step will split the single continuous character string like “AGTTGTTAGTCTACGT…” into a **character vector** like `c("A","G","T","T","G","T","T","A","G","T","C","T","A","C","G","T"...)`, where each element of the vector is a single character stored in a separate slot.
### 22\.2\.1 Removing unwanted characters
The second item is the easiest to take care of. *R* and many programming languages have tools called **regular expressions** that allow you to manipulate text. R has a function called `gsub()` which allows you to substitute or delete character data from a string. First I’ll remove all those `\n` values.
The regular expression function `gsub()` takes three arguments:
1\. `pattern = ...`. This is what we need it to find so we can replace it.
1\. `replacement = ...`. The replacement.
1\. `x = ...`. A character string or vector where `gsub()` will do its work.
We need to get rid of the `\n` so that we are left with only A, T, C and G, which are the actual information of the sequence. We want `\n` completely removed, so the replacement will be `""`, which is a set of quotation marks with nothing in the middle, meaning “delete the target pattern and put nothing in its place.”
One thing that is tricky about regular expressions is that many characters have special meaning to the functions, such as slashes, dollar signs, and brackets. So, if you want to find and replace one of these specially designated characters you need to put a backslash in front of it. So when we set the pattern, instead of setting the pattern to a backslash before an n, `\n`, we have to give it two backslashes, `\\n`.
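To see the escaping in action on something small first, here is a toy string invented just for illustration (the real sequence is handled next):
```
# a made-up string with embedded newlines
toy_string <- "AGTT\nGTTA\nGTCT"

# the pattern "\\n" matches the newline character; "" deletes it
gsub(pattern = "\\n", replacement = "", x = toy_string)
```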
Here is the regular expression to delete the newline character \\n.
```
# note: we want to find all the \n, but need to set the pattern as \\n
dengueseq_vector <- gsub(pattern = "\\n",
replacement = "",
x = dengueseq_fasta)
```
We can use `strtrim()` to see if it worked
```
strtrim(dengueseq_vector, 80)
```
```
## [1] ">NC_001477.1 Dengue virus 1, complete genomeAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTC"
```
Now for the metadata header. This is a bit complex, but the following code is going to take all the text that occurs before the beginning of the sequence (“AGTTGTTAGTC”) and delete it.
First, I’ll define what I want to get rid of in an *R* object. This will make the **call** to `gsub()` a little cleaner to read
```
seq.header <- ">NC_001477.1 Dengue virus 1, complete genome"
```
Now I’ll get rid of the header with `gsub()`.
```
dengueseq_vector <- gsub(pattern = seq.header, # object defined above
replacement = "",
x = dengueseq_vector)
```
See if it worked:
```
strtrim(dengueseq_vector, 80)
```
```
## [1] "AGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTAACAGTTTTTTATTAG"
```
### 22\.2\.2 Splitting unbroken strings into character vectors
Now the more complex part. We need to split up a continuous, unbroken string of letters into a vector where each letter is on its own. This can be done with the `str_split()` function (“string split”) from the `stringr` package. The notation `stringr::str_split()` means “use the `str_split()` function from the `stringr` package.” More specifically, it temporarily loads the `stringr` package and gives R access to just the `str_split()` function. This allows you to call a single function without loading the whole library.
There are several arguments to `str_split`, and I’ve tacked a `[[1]]` on to the end.
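Why the `[[1]]`? Here is a minimal illustration with a short made-up string (not part of the original lesson): `str_split()` returns a **list** with one element per input string, and `[[1]]` pulls out the first (and here, only) element.
```
# str_split() returns a list, even when given a single string
toy_split <- stringr::str_split("AGTT", pattern = "")
class(toy_split)  # "list"

# [[1]] extracts the character vector we actually want
toy_split[[1]]
```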
First, run the command
```
dengueseq_vector_split <- stringr::str_split(dengueseq_vector,
pattern = "",
simplify = FALSE)[[1]]
```
Look at the output with str()
```
str(dengueseq_vector_split)
```
```
## chr [1:10735] "A" "G" "T" "T" "G" "T" "T" "A" "G" "T" "C" "T" "A" "C" "G" ...
```
We can explore what the different arguments do by modifying them. Change `pattern = ""` to `pattern = "A"`. Can you figure out what happened?
```
# re-run the command with "pattern = "A"
dengueseq_vector_split2 <- stringr::str_split(dengueseq_vector,
pattern = "A",
simplify = FALSE)[[1]]
str(dengueseq_vector_split2)
```
```
## chr [1:3427] "" "GTTGTT" "GTCT" "CGTGG" "CCG" "C" "" "G" "" "C" "GTTTCG" ...
```
And try changing `pattern = ""` to `pattern = "G"`.
```
# re-run the command with "pattern = "G"
dengueseq_vector_split3 <- stringr::str_split(dengueseq_vector,
pattern = "G",
simplify = FALSE)[[1]]
str(dengueseq_vector_split3)
```
```
## chr [1:2771] "A" "TT" "TTA" "TCTAC" "T" "" "ACC" "ACAA" "AACA" "TTTC" ...
```
Run this code to compare the three ways we just used `str_split()` (don’t worry about what it does). Does this help you see what’s up?
```
options(str = strOptions(vec.len = 10))
str(list(dengueseq_vector_split[1:20],
dengueseq_vector_split2[1:10],
dengueseq_vector_split3[1:10]))
```
```
## List of 3
## $ : chr [1:20] "A" "G" "T" "T" "G" "T" "T" "A" "G" "T" ...
## $ : chr [1:10] "" "GTTGTT" "GTCT" "CGTGG" "CCG" "C" "" "G" "" "C"
## $ : chr [1:10] "A" "TT" "TTA" "TCTAC" "T" "" "ACC" "ACAA" "AACA" "TTTC"
```
So, what does the `pattern = ...` argument do? For more info open up the help file for `str_split` by calling `?str_split`.
Something cool which we will explore in the next exercise is that we can do summaries on vectors of nucleotides, like this:
```
table(dengueseq_vector_split)
```
```
## dengueseq_vector_split
## A C G T
## 3426 2240 2770 2299
```
Chapter 23 DNA descriptive statistics \- Part 1
============================================
**By**: Avril Coghlan
**Adapted, edited and expanded**: Nathan Brouwer ([brouwern@gmail.com](mailto:brouwern@gmail.com)) under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/).
23\.1 Preface
-------------
This is a modification of [“DNA Sequence Statistics (1\)”](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/chapter1.html) from Avril Coghlan’s [*A little book of R for bioinformatics.*](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/index.html). The text and code were originally written by Dr. Coghlan and distributed under the [Creative Commons 3\.0](https://creativecommons.org/licenses/by/3.0/us/) license.
23\.2 Introduction
------------------
23\.3 Vocabulary
----------------
* GC content
* DNA words
* scatterplots, histograms, piecharts, and boxplots
23\.4 Functions
---------------
* `seqinr::GC()`
* `seqinr::count()`
23\.5 Learning objectives
-------------------------
By the end of this tutorial you will be able to, among other things
* Determine the GC content using GC() and obtain other summary data with count()
23\.6 Preliminaries
-------------------
```
library(compbio4all)
library(seqinr)
```
23\.7 Converting DNA from FASTA format
--------------------------------------
In a previous exercise we downloaded and examined a DNA sequence in FASTA format. The sequence we worked with is also stored as a data file within the `compbio4all` package and can be brought into memory using the `data()` command.
```
data("dengueseq_fasta")
```
We can look at this data object with the `str()` command
```
str(dengueseq_fasta)
```
```
## chr ">NC_001477.1 Dengue virus 1, complete genome\nAGTTGTTAGTCTACGTGGACCGACAAGAACAGTTTCGAATCGGAAGCTTGCTTAACGTAGTTCTA"| __truncated__
```
This isn’t in a format we can work with directly, so we’ll use the function `fasta_cleaner()` to set it up.
```
# the FASTA header line, shown for reference (not passed to fasta_cleaner() below)
header. <- ">NC_001477.1 Dengue virus 1, complete genome"
# convert the raw FASTA string into a vector with one base per element
dengueseq_vector <- compbio4all::fasta_cleaner(dengueseq_fasta)
```
Now check it out.
```
str(dengueseq_vector)
```
```
## chr [1:10735] "A" "G" "T" "T" "G" "T" "T" "A" "G" "T" "C" "T" "A" "C" "G" ...
```
What we have here is each base of the sequence in a separate slot of our vector.
The first four bases are “AGTT”
We can see the first one like this
```
dengueseq_vector[1]
```
```
## [1] "A"
```
The second one like this
```
dengueseq_vector[2]
```
```
## [1] "G"
```
The first and second like this
```
dengueseq_vector[1:2]
```
```
## [1] "A" "G"
```
and all four like this
```
dengueseq_vector[1:4]
```
```
## [1] "A" "G" "T" "T"
```
23\.8 Length of a DNA sequence
------------------------------
Once you have retrieved a DNA sequence, we can obtain some simple statistics to describe that sequence, such as the sequence’s total length in nucleotides. In the above example, we retrieved the DEN\-1 Dengue virus genome sequence and stored it in the vector variable dengueseq\_vector. To obtain the length of the genome sequence, we would use the length() function, typing:
```
length(dengueseq_vector)
```
```
## [1] 10735
```
The length() function will give you back the length of the sequence stored in the variable dengueseq\_vector, in nucleotides. The length() function actually gives the number of **elements** (slots) in the input vector that you passed to it, which in this case is the number of elements in the vector dengueseq\_vector. Since each element of the vector dengueseq\_vector contains one nucleotide of the DEN\-1 Dengue virus sequence, the result tells us the length of the genome sequence (i.e. 10735 nucleotides long).
### 23\.8\.1 Base composition of a DNA sequence
An obvious first analysis of any DNA sequence is to count the number of occurrences of the four different nucleotides (“A”, “C”, “G”, and “T”) in the sequence. This can be done using the table() function. For example, to find the number of As, Cs, Gs, and Ts in the DEN\-1 Dengue virus sequence (which you have put into the vector variable dengueseq\_vector using the commands above), you would type:
```
table(dengueseq_vector)
```
```
## dengueseq_vector
## A C G T
## 3426 2240 2770 2299
```
This means that the DEN\-1 Dengue virus genome sequence has 3426 As occurring throughout the genome, 2240 Cs, and so forth.
### 23\.8\.2 GC Content of DNA
One of the most fundamental properties of a genome sequence is its **GC content**, the fraction of the sequence that consists of Gs and Cs, ie. the %(G\+C).
The GC content can be calculated as the percentage of the bases in the genome that are Gs or Cs. That is, GC content \= (number of Gs \+ number of Cs) × 100 / (genome length). For example, if the genome is 100 bp, and 20 bases are Gs and 21 bases are Cs, then the GC content is (20 \+ 21\) × 100 / 100 \= 41%.
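As a quick check, you can type the toy calculation from that example straight into the R console:
```
# GC content of a made-up 100 bp genome with 20 Gs and 21 Cs
(20 + 21) * 100 / 100
```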
You can easily calculate the GC content based on the number of As, Gs, Cs, and Ts in the genome sequence. For example, for the DEN\-1 Dengue virus genome sequence, we know from using the table() function above that the genome contains 3426 As, 2240 Cs, 2770 Gs and 2299 Ts. Therefore, we can calculate the GC content using the command:
```
(2240+2770)*100/(3426+2240+2770+2299)
```
```
## [1] 46.66977
```
Alternatively, if you are feeling lazy, you can use the GC() function in the SeqinR package, which gives the fraction of bases in the sequence that are Gs or Cs.
```
seqinr::GC(dengueseq_vector)
```
```
## [1] 0.4666977
```
The result above means that the fraction of bases in the DEN\-1 Dengue virus genome that are Gs or Cs is 0\.4666977\. To convert the fraction to a percentage, we have to multiply by 100, so the GC content as a percentage is 46\.66977%.
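If you would rather have R do that conversion for you, just multiply the result of the same `GC()` call by 100:
```
# GC content expressed as a percentage rather than a fraction
seqinr::GC(dengueseq_vector) * 100
```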
### 23\.8\.3 DNA words
As well as the frequency of each of the individual nucleotides (“A”, “G”, “T”, “C”) in a DNA sequence, it is also interesting to know the frequency of longer **DNA words**, also referred to as **genomic words**. The individual nucleotides are DNA words that are 1 nucleotide long, but we may also want to find out the frequency of DNA words that are 2 nucleotides long (ie. “AA”, “AG”, “AC”, “AT”, “CA”, “CG”, “CC”, “CT”, “GA”, “GG”, “GC”, “GT”, “TA”, “TG”, “TC”, and “TT”), 3 nucleotides long (eg. “AAA”, “AAT”, “ACG”, etc.), 4 nucleotides long, etc.
To find the number of occurrences of DNA words of a particular length, we can use the count() function from the R SeqinR package.
The count() function only works with lower\-case letters, so first we have to use the tolower() function to convert our upper\-case genome to lower case.
```
dengueseq_vector <-tolower(dengueseq_vector)
```
Now we can look for words. For example, to find the number of occurrences of DNA words that are 1 nucleotide long in the sequence dengueseq\_vector, we type:
```
seqinr::count(dengueseq_vector, 1)
```
```
##
## a c g t
## 3426 2240 2770 2299
```
As expected, this gives us the number of occurrences of the individual nucleotides. To find the number of occurrences of DNA words that are 2 nucleotides long, we type:
```
seqinr::count(dengueseq_vector, 2)
```
```
##
## aa ac ag at ca cc cg ct ga gc gg gt ta tc tg tt
## 1108 720 890 708 901 523 261 555 976 500 787 507 440 497 832 529
```
Note that by default the count() function includes all overlapping DNA words in a sequence. Therefore, for example, the sequence “ATG” is considered to contain two words that are two nucleotides long: “AT” and “TG”.
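To convince yourself of the overlapping behavior, you could run `count()` on a tiny made-up three-base vector; most of the 16 dinucleotide counts will simply be zero:
```
# "atg" contains two overlapping 2-letter words: "at" and "tg"
seqinr::count(c("a", "t", "g"), 2)
```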
If you type help(‘count’), you will see that the result (output) of the function count() is a table object. This means that you can use double square brackets to extract the values of elements from the table. For example, to extract the value of the third element of the two\-nucleotide table (the number of “ag” words in the DEN\-1 Dengue virus sequence), you can type:
```
denguetable_2 <- seqinr::count(dengueseq_vector,2)
denguetable_2[[3]]
```
```
## [1] 890
```
The command above extracts the third element of the table produced by count(dengueseq\_vector, 2\), which we have stored in the table variable denguetable\_2.
Alternatively, you can find the value of the element of the table for the DNA word “aa” by typing:
```
denguetable_2[["aa"]]
```
```
## [1] 1108
```
Once you have the table you can make a basic plot:
```
barplot(denguetable_2)
```
We can sort by the number of words using the sort() command
```
sort(denguetable_2)
```
```
##
## cg ta tc gc gt cc tt ct at ac gg tg ag ca ga aa
## 261 440 497 500 507 523 529 555 708 720 787 832 890 901 976 1108
```
Let’s save over the original object
```
denguetable_2 <- sort(denguetable_2)
```
```
barplot(denguetable_2)
```
R will automatically try to optimize the appearance of the labels on the graph so you may not see all of them; no worries.
R can also make pie charts. Pie charts only really work when there are a few items being plotted, like the four bases.
```
denguetable_1 <- seqinr::count(dengueseq_vector,1)
```
Make a pie chart with pie()
```
pie(denguetable_1)
```
### 23\.8\.4 Summary
In this practical, you have learned to use the following R functions:
length() for finding the length of a vector or list
table() for printing out a table of the number of occurrences of each type of item in a vector or list.
These functions belong to the standard installation of R.
You have also learnt the following R functions that belong to the SeqinR package:
GC() for calculating the GC content for a DNA sequence
count() for calculating the number of occurrences of DNA words of a particular length in a DNA sequence
23\.9 Acknowledgements
----------------------
This is a modification of [“DNA Sequence Statistics (1\)”](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/src/chapter1.html) from Avril Coghlan’s [*A little book of R for bioinformatics.*](https://a-little-book-of-r-for-bioinformatics.readthedocs.io/en/latest/index.html). Almost all of text and code was originally written by Dr. Coghlan and distributed under the [Creative Commons 3\.0](https://creativecommons.org/licenses/by/3.0/us/) license.
In “A little book…” Coghlan noted: “Many of the ideas for the examples and exercises for this chapter were inspired by the Matlab case studies on Haemophilus influenzae (www.computational\-genomics.net/case\_studies/haemophilus\_demo.html) and Bacteriophage lambda ([http://www.computational\-genomics.net/case\_studies/lambdaphage\_demo.html](http://www.computational-genomics.net/case_studies/lambdaphage_demo.html)) from the website that accompanies the book Introduction to Computational Genomics: a case studies approach by Cristianini and Hahn (Cambridge University Press; www.computational\-genomics.net/book/).”
### 23\.9\.1 License
The content in this book is licensed under a Creative Commons Attribution 3\.0 License.
[https://creativecommons.org/licenses/by/3\.0/us/](https://creativecommons.org/licenses/by/3.0/us/)
### 23\.9\.2 Exercises
Answer the following questions, using the R package. For each question, please record your answer, and what you typed into R to get this answer.
Model answers to the exercises are given in Answers to the exercises on DNA Sequence Statistics (1\).
1. What are the last twenty nucleotides of the Dengue virus genome sequence?
2. What is the length in nucleotides of the genome sequence for the bacterium Mycobacterium leprae strain TN (accession NC\_002677\)?
Note: Mycobacterium leprae is a bacterium that is responsible for causing leprosy, which is classified by the WHO as a neglected tropical disease. As the genome sequence is a DNA sequence, if you are retrieving its sequence via the NCBI website, you will need to look for it in the NCBI Nucleotide database.
3. How many of each of the four nucleotides A, C, T and G, and any other symbols, are there in the Mycobacterium leprae TN genome sequence?
Note: other symbols apart from the four nucleotides A/C/T/G may appear in a sequence. They correspond to positions in the sequence that are not clearly one base or another, due, for example, to sequencing uncertainties. For example, the symbol ‘N’ means ‘aNy base’, while ‘R’ means ‘A or G’ (puRine). There is a table of symbols at www.bioinformatics.org/sms/iupac.html.
4. What is the GC content of the *Mycobacterium leprae TN* genome sequence, when (i) all non\-A/C/T/G nucleotides are included, (ii) non\-A/C/T/G nucleotides are discarded?
Hint: look at the help page for the GC() function to find out how it deals with non\-A/C/T/G nucleotides.
5. How many of each of the four nucleotides A, C, T and G are there in the complement of the Mycobacterium leprae TN genome sequence? *Hint*: you will first need to search for a function to calculate the complement of a sequence. Once you have found out what function to use, remember to use the help() function to find out what are the arguments (inputs) and results (outputs) of that function. How does the function deal with symbols other than the four nucleotides A, C, T and G? Are the numbers of As, Cs, Ts, and Gs in the complementary sequence what you would expect?
6. How many occurrences of the DNA words CC, CG and GC occur in the Mycobacterium leprae TN genome sequence?
7. How many occurrences of the DNA words CC, CG and GC occur in the (i) first 1000 and (ii) last 1000 nucleotides of the Mycobacterium leprae TN genome sequence?
8. How can you check that the subsequence that you have looked at is 1000 nucleotides long?
Getting started with R itself (or not)
======================================
Vocabulary
----------
* console
* script editor / source viewer
* interactive programming
* scripts / script files
* .R files
* text files / plain text files
* command execution / execute a command from script editor
* comments / code comments
* commenting out / commenting out code
* stackoverflow.com
* the rstats hashtag
R commands
----------
* c(…)
* mean(…)
* sd(…)
* ?
* read.csv(…)
This is a walk\-through of a very basic R session. It assumes you have successfully installed R and RStudio onto your computer, and nothing else.
Most people who use R do not actually use the program itself \- they use a GUI (graphical user interface) “front end” that makes R a bit easier to use. However, you will probably run into the icon for the underlying R program on your desktop or elsewhere on your computer. It usually looks like this:
ADD IMAGE HERE
The long string of numbers has to do with the version and whether it is 32 or 64 bit (not important for what we do).
If you are curious you can open it up and take a look \- it actually looks a lot like RStudio, where we will do all our work (or rather, RStudio looks like R). Sometimes when people are getting started with R they will accidentally open R instead of RStudio; if things don’t seem to look or work the way you think they should, you might be in R, not RStudio.
#### 23\.10\.4\.1 R’s console as a scientific calculator
You can interact with R’s console similar to a scientific calculator. For example, you can use parentheses to set up mathematical statements like
```
5*(1+1)
```
```
## [1] 10
```
Note, however, that you have to be explicit about multiplication. If you try the following it won’t work.
```
5(1+1)
```
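If you do try it, R should stop with an error (typically something like “attempt to apply non\-function”, since R reads `5(...)` as an attempt to call `5` as if it were a function). The fix is simply to write the multiplication out:

```
# 5(1+1)   # fails: R thinks you are trying to use 5 as a function
5*(1+1)    # explicit multiplication works
```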
R also has built\-in functions that work similarly to those you might have used in Excel. For example, in Excel you can calculate the average of a set of numbers by typing “\=average(1,2,3\)” into a cell. R can do the same thing, except:
* The command is “mean”
* You don’t start with “\=”
* You have to package up the numbers like what is shown below using “c(…)”
```
mean(c(1,2,3))
```
```
## [1] 2
```
Where “c(…)” packages up the numbers the way the mean() function wants to see them.
If you just do the following R will give you an answer, but it’s the wrong one:
```
mean(1,2,3)
```
**This is a common issue with R – and many programs, really – it won’t always tell you when something didn’t go as planned. This is because it doesn’t know something didn’t go as planned; you have to learn the rules R plays by.**
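To see the difference for yourself, compare the two calls below (the short version: without c(…), only the first number is treated as the data, and the other numbers get matched to mean()’s other arguments):

```
mean(1,2,3)     # returns 1: only the 1 is treated as the data
mean(c(1,2,3))  # returns 2: all three numbers are passed as one set of data
```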
#### 23\.10\.4\.2 Practice: math in the console
See if you can reproduce the following results
**Division**
```
10/3
```
```
## [1] 3.333333
```
**The standard deviation**
```
sd(c(5,10,15)) # note the use of "c(...)"
```
```
## [1] 5
```
#### 23\.10\.4\.3 The script editor
While you can interact with R directly within the console, the standard way to work in R is to write what are known as **scripts.** These are computer code instructions written to R in a **script file.** These are saved with the extension **.R** but are really just a form of **plain text file.**
To work with scripts, you type commands in the script editor, then tell R to **execute** the command. This can be done several ways.
First, you tell RStudio the line of code you want to run by either
\* Placing the cursor at the end of a line of code, OR
\* Clicking and dragging over the code you want to run in order to highlight it.
Second, you tell RStudio to run the code by
\* Clicking the “Run” icon in the upper right hand side of the script editor (a grey box with a green arrow emerging from it), OR
\* Pressing the control key (“ctrl”) and then the enter key on the keyboard.
The code you’ve chosen to run will be sent by RStudio from the script editor over to the console. The console will show you both the code and then the output.
You can run several lines of code if you want; the console will run a line, print the output, and then run the next line. First I’ll use the command mean(), and then the command sd() for the standard deviation:
```
mean(c(1,2,3))
```
```
## [1] 2
```
```
sd(c(1,2,3))
```
```
## [1] 1
```
#### 23\.10\.4\.4 Comments
One of the reasons we use script files is that we can combine R code with **comments** that tell us what the R code is doing. Comments are preceded by the hash symbol **\#**. Frequently we’ll write code like this:
```
#The mean of 3 numbers
mean(c(1,2,3))
```
If you highlight all of this code (including the comment) and then click on “Run”, you’ll see that RStudio sends all of the code over to the console.
```
## [1] 2
```
Comments can also be placed at the *end* of a line of code:
```
mean(c(1,2,3)) #Note the use of c(...)
```
Sometimes we write code and then don’t want R to run it. We can prevent R from executing the code even if it’s sent to the console by putting a “\#” *in front* of the code.
If I run this code, I will get just the mean but not the sd.
```
mean(c(1,2,3))
#sd(c(1,2,3))
```
Doing this is called **commenting out** a line of code.
23\.11 Help!
------------
There are many resources for figuring out R and RStudio, including
* R’s built in “help” function
* Q\&A websites like **stackoverflow.com**
* twitter, using the hashtag \#rstats
* blogs
* online books and course materials
### 23\.11\.1 Getting “help” from R
If you are using a function in R you can get info about how it works like this
```
?mean
```
In RStudio the help screen should appear, probably above your console. If you start reading this help file, though, you don’t have to go far until you start seeing lots of R lingo, like “S3 method”, “na.rm”, “vectors”. Unfortunately, the R help files are usually not written for beginners, and reading help files is a skill you have to acquire.
For example, when we load data into R in subsequent lessons we will use a function called read.csv().
Access the help file by typing “?read.csv” into the console and pressing enter. Surprisingly, the help file R gives you isn’t named for the function you asked about, but for read.table(). This is a function related to read.csv(), but when you’re a beginner things like this can really throw you off.
Kieran Healy has produced a great cheatsheet for reading R’s help pages as part of his forthcoming book. It should be available online at [http://socviz.co/appendix.html\#a\-little\-more\-about\-r](http://socviz.co/appendix.html#a-little-more-about-r)
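A few related ways to search the help system are sketched below; the “??” form does a broader keyword search across all installed help pages:

```
?read.csv            # opens the help page (which it shares with read.table)
help("read.csv")     # the same thing, written out long-hand
??"comma-separated"  # keyword search across all installed help pages
```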
### 23\.11\.2 Getting help from the internet
The best way to get help on any topic is to just do an internet search like this: “R read.csv”. Usually the first thing on the results list will be the R help file, but the second or third will be a blog post or something else where a helpful person has discussed how that function works.
For very basic R commands this might not always be productive, but it’s always worth a try. For things related to stats, plotting, and programming there is frequently lots of information. Also try searching YouTube.
### 23\.11\.3 Getting help from online forums
Often when you do an internet search for an R topic you’ll see results from the website www.stackoverflow.com, or maybe www.crossvalidated.com if it’s a statistics topic. These are excellent resources, and many questions that you may have already have answers on them. Stackoverflow has an internal search function and also suggests potentially relevant posts.
Before posting to one of these sites yourself, however, do some research; there is a particular type and format of question that is most likely to get a useful response. Sadly, people new to the site often get “flamed” by impatient pros.
### 23\.11\.4 Getting help from twitter
Twitter is a surprisingly good place to get information or to find other people new to R. It’s often most useful to ask people for learning resources or general references, but you can also post direct questions and see if anyone responds, though usually it’s more advanced users who engage in twitter\-based code discussion.
A standard tweet might be
“Hey \#rstats twitter, I am new to \#rstats and really stuck on some of the basics. Any suggestions for good resources for someone starting from scratch?”
23\.12 Other features of RStudio
--------------------------------
### 23\.12\.1 Adjusting the pane layout
You can adjust the location of each of RStudio’s 4 window panes, as well as their size.
To set the pane layout go to
1\. “Tools” on the top menu
1\. “Global options”
1\. “Pane Layout”
Use the drop\-down menus to set things up. I recommend:
1\. Lower left: “Console”
1\. Top right: “Source”
1\. Top left: “Plot, Packages, Help Viewer”
1\. This will leave the “Environment…” panel in the lower right.
### 23\.12\.2 Adjusting size of windows
You can click on the edge of a pane and adjust its size. For most R work we want the console to be big. For beginners, the “Environment, history, files” panel can be made really small.
23\.13 Practice (OPTIONAL)
--------------------------
Practice the following operations. Type them directly into the console and execute them. Also write them in a script in the script editor and run them.
**Square roots**
```
sqrt(42)
```
```
## [1] 6.480741
```
**The date**
Some functions in R can be executed with nothing in the parentheses.
```
date()
```
```
## [1] "Tue May 10 14:54:09 2022"
```
**Exponents**
The **^** symbol is used for exponents.
```
42^2
```
```
## [1] 1764
```
**A series of numbers**
A colon between two numbers creates a series of numbers.
```
1:42
```
```
## [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
## [26] 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42
```
**logs**
The default for the log() function is the natural log.
```
log(42)
```
```
## [1] 3.73767
```
log10() gives the base\-10 log.
```
log10(42)
```
```
## [1] 1.623249
```
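If you need some other base, log() also takes a base argument; the following two calls should give the same answer:

```
log(42, base = 10)  # same as log10(42): 1.623249
log10(42)
```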
**exp() raises e to a power**
```
exp(3.73767)
```
```
## [1] 42.00002
```
**Multiple commands can be nested**
```
sqrt(42)^2
log(sqrt(42)^2)
exp(log(sqrt(42)^2))
```
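For reference, the first and last lines should each come back to 42, since squaring undoes the square root and exp() undoes log(); annotated:

```
sqrt(42)^2            # 42
log(sqrt(42)^2)       # the natural log of 42, about 3.73767
exp(log(sqrt(42)^2))  # back to 42
```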
| Life Sciences |
brouwern.github.io | https://brouwern.github.io/lbrb/logarithms-in-r.html |
Chapter 24 Logarithms in R
==========================
**By** Nathan Brouwer
Logging splits up multiplication into addition: log(m\*n) is the same as log(m) \+ log(n).
You can check this:
```
m<-10
n<-11
log(m*n)
```
```
## [1] 4.70048
```
```
log(m)+log(n)
```
```
## [1] 4.70048
```
```
log(m*n) == log(m)+log(n)
```
```
## [1] TRUE
```
Exponentiation undoes logs:
```
exp(log(m*n))
```
```
## [1] 110
```
```
m*n
```
```
## [1] 110
```
The key equation in BLAST’s E\-values is
u \= ln(K\*m\*n)/lambda
Because logs split multiplication into addition, this can be rewritten as
u \= \[ln(K) \+ ln(m\*n)]/lambda
We can check this
```
K <- 1
m <- 10
n <- 11
lambda <- 110
log(K*m*n)/lambda
```
```
## [1] 0.04273164
```
```
(log(K) + log(m*n))/lambda
```
```
## [1] 0.04273164
```
```
log(K*m*n)/lambda == (log(K) + log(m*n))/lambda
```
```
## [1] TRUE
```
| Life Sciences |
opengeohub.github.io | https://opengeohub.github.io/SoilSamples/index.html |
About
=====
Rationale
---------
This is a public compendium of global, regional, national and
sub\-national **soil samples** and/or **soil profile** datasets (points with
Observations and Measurements of soil properties and characteristics).
Datasets listed here, assuming [compatible open data license](https://opendefinition.org/licenses/), are afterwards
imported into the [**Global compilation of soil chemical and physical
properties and soil classes**](https://opengeohub.org/about-openlandmap/) and
eventually used to create better open soil information across countries.
The specific objectives of this initiative are:
* To enable data digitization, import and binding \+ harmonization,
* To accelerate research collaboration and networking,
* To enable development of more accurate / more usable global and
regional soil property and class maps (typically published via
<https://OpenLandMap.org>),
Download compiled data
----------------------
Compiled data (imported, standardized, quality\-controlled) is available
through a diversity of standard formats:
* RDS file (native R data format);
* GPKG file ([Geopackage file](https://www.geopackage.org/) ready to be opened in QGIS);
All files can be downloaded from the `/out` [directory](https://github.com/OpenGeoHub/SoilSamples/-/tree/master/out).
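For example, once downloaded, the two formats can be read into R along the following lines (a sketch; the file names below are placeholders for the actual files in the `/out` directory):

```
# File names are placeholders; use the actual files from the /out directory.
soil_points <- readRDS("out/soil_samples.rds")       # native R format

library(sf)                                          # for reading GeoPackage files
soil_points_sf <- st_read("out/soil_samples.gpkg")   # spatial features, ready for QGIS / R
```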
Add your own data
-----------------
The minimum requirements to submit a dataset for inclusion to [the
OpenLandMap repository](https://gitlab.com/openlandmap/) are:
* License and terms of use clearly specified AND,
* Complete and consistent metadata that can ensure correct
standardization and harmonization steps AND,
* **At least 50 unique spatial locations** AND,
* No broken or invalid URLs,
Datasets that do NOT satisfy the above listed minimum requirements might be
removed. If you discover an issue with license, data description or
version number of a dataset, please open a [Github
issue](https://github.com/OpenGeoHub/SoilSamples/issues).
Figure 0\.1: Soil profiles and soil samples with chemical and physical properties global compilation. For more info see: [https://gitlab.com/openlandmap/compiled\-ess\-point\-data\-sets](https://gitlab.com/openlandmap/compiled-ess-point-data-sets).
Recommended settings for all datasets are:
* Peer\-reviewed versions of the datasets (i.e. a dataset accompanied
by a peer\-reviewed publication) should have priority,
* Register your dataset (use e.g. <https://zenodo.org>) and assign a DOI
to each version,
* Provide enough metadata so that it can be imported and bound with
other data without errors,
* If your dataset is a compilation of previously published datasets, please
indicate this in the description,
Information outdated or missing? Please open an issue or best do a
correction and then a [pull
request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
Existing soil data projects and initiatives
-------------------------------------------
Multiple international organizations from [FAO’s Global Soil Partnership](http://www.fao.org/global-soil-partnership/en/) to [UNCCD’s Land Degradation Neutrality](https://www.unccd.int/actions/ldn-target-setting-programme), [European Commission](https://esdac.jrc.ec.europa.eu/) and similar,
support soil data collation projects and especially curation of the legacy soil data.
Some existing soil Observations and Measurements (O\&M) soil data initiatives include:
* [**FAO’s Data Hub**](https://www.fao.org/soils-portal/data-hub/en/),
* [**FAO’s Global Soil Information System (GLOSIS)**](https://data.apps.fao.org/glosis/),
* [**Fine Root Ecology Database (FRED)**](https://roots.ornl.gov/),
* [**FLUXNET global network**](https://fluxnet.fluxdata.org/),
* [**Global database of soil nematodes**](https://www.nature.com/articles/s41597-020-0437-3),
* [**Global soil macrofauna database**](http://macrofauna.earthworms.info/),
* [**Global soil respiration database (SRDB)**](https://github.com/bpbond/srdb),
* [**International Soil Modeling Consortium (ISMC)**](https://soil-modeling.org),
* [**International Soil Moisture Network**](https://ismn.geo.tuwien.ac.at/en/),
* [**International Soil Radiocarbon Database (ISRaD)**](https://soilradiocarbon.org),
* [**International Soil Carbon Network (ISCN)**](http://iscn.fluxdata.org/),
* [**LandPKS project**](http://portal.landpotential.org/#/landpksmap),
* [**Long Term Ecological Research (LTER) Network sites**](https://lternet.edu/site/),
* [**National Ecological Observatory Network (NEON)**](https://www.neonscience.org),
* [**Open Soil Spectral Library (OSSL)**](https://soilspectroscopy.org),
* [**Open Bodem Index**](https://agrocares.github.io/Open-Bodem-Index-Calculator/),
* [**ORNL DAAC theme soil**](https://daac.ornl.gov/get_data/#themes),
* [**Soils Data Harmonization (SoDaH)**](https://lter.github.io/som-website),
* [**WoSIS Soil Profile Database**](https://www.isric.org/explore/wosis),
UN’s FAO hosts the [**International Network of Soil Information Institutions**](https://www.fao.org/global-soil-partnership/pillars-action/4-information-data/insii/en/), which we
recommend joining and actively contributing to. INSII and FAO’s Global Soil Partnership aim at
developing and hosting the [**Global Soil Information System (GLOSIS)**](https://data.apps.fao.org/glosis/),
although this does not yet serve any soil observations and measurements.
A more in\-depth inventory of the various national and international soil datasets can be found in:
* Rossiter, D.G.,: [**Compendium of Soil Geographical Databases**](https://www.isric.org/explore/soil-geographic-databases)
Target soil variables
---------------------
Soil variables of interest include:
1. **Chemical soil properties**:
* Soil organic carbon, total carbon, total nitrogen,
* Soil pH, effective Cation Exchange Capacity (eCEC),
* Soil sodicity (presence of a high proportion of sodium ions relative to other cations),
* Macro\-nutrients: extractable — potassium (K), calcium (Ca), sodium
(Na), magnesium (Mg) and similar,
* Micro\-nutrients: phosphorus (P), sulfur (S), iron (Fe), zinc (Zn)
and similar,
* Soil pollutants, heavy metals and similar,
* Electrical conductivity,
2. **Physical soil properties**:
* Soil texture and texture fractions: silt, clay and sand, stone content,
* Bulk density, depth to bedrock and similar,
* Hydraulic conductivity, water content — Field Capacity (FC; the amount of water
held in the soil after it has been fully wetted and free drainage has stopped),
Permanent Wilting Point (PWP; the soil moisture condition at which the plant could
not obtain water and would wilt and die), Plant Available Water Capacity (PAWC;
the amount of water between field capacity and permanent wilting point water holding capacity) and
similar,
* Soil temperature,
3. **Soil biological / biodiversity variables**:
* Soil biomass,
* Soil micro\-, meso\-, macro\- and mega\-fauna abundance,
* Soil biodiversity indices,
4. **Soil classification / taxonomy variables**:
* Soil type,
* Soil suitability classes, soil fertility classes,
* Soil texture classes and families,
5. **Soil absorbances / soil spectroscopy variables**:
* Soil absorbance in VIS\-NIR and MIR part of spectra,
Recommended O\&M standards
--------------------------
As a general rule of thumb, we recommend that all contributors use the following
[general scheme](https://soilspectroscopy.github.io/ossl-manual/) to organize Soil Observations \& Measurements, with 3–4 main tables
and metadata \+ legends organized in other tables (a minimal illustration follows the list below):
* Soil site information (geographical coordinates, land use / land cover, soil classification etc),
* Soil horizon information (soil observations and measurements specific to soil layers / diagnostic horizons),
* Proximal soil sensing information including soil scans,
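A minimal illustration of this split, with hypothetical column names and a shared site identifier linking the tables, might look like:

```
# Hypothetical column names; the point is the site / horizon split and the shared key.
site <- data.frame(
  site_id    = c("S001", "S002"),
  longitude  = c(5.67, 5.70),    # WGS84 decimal degrees
  latitude   = c(51.97, 51.99),
  land_cover = c("cropland", "grassland")
)
horizon <- data.frame(
  site_id = c("S001", "S001", "S002"),
  hzn_top = c(0, 20, 0),         # upper depth, cm
  hzn_bot = c(20, 50, 30),       # lower depth, cm
  soc     = c(2.1, 1.2, 3.4)     # soil organic carbon, %
)
merge(site, horizon, by = "site_id")  # one row per horizon, with site information attached
```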
For making soil observations and measurements we recommend following the USDA [National Cooperative Soil Survey (NCSS) Soil
Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/) codes and specification as much as possible. These are explained in detail in the [**Kellogg Soil Survey Laboratory Methods Manual**](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf)
and [**The Field Book for Describing and Sampling Soils**](https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/research/guide/).
Likewise, [**FAO Guidelines for soil description**](http://www.fao.org/3/a0541e/a0541e.pdf),
and the FAO’s [GSOC measurement, monitoring, reporting and verification (MRV) protocol](http://www.fao.org/documents/card/en/c/cb0509en/) also explain in
detail how to collect soil samples and set up a system for monitoring soil organic carbon.
It is recommended that one should, as much as possible, use the international standards
and references. Some highly recommended protocols and standards include:
* [UUID generator tool](https://cran.r-project.org/package=uuid) to generate unique IDs for unique soil sites, horizons, samples etc (to convert an existing local ID to UUID, best use `openssl::md5(local_id)`; see the short sketch after this list),
* [Open Location Codes](https://opensource.google/projects/open-location-code) to generate geographic location codes,
* [OGC standards](https://www.ogc.org/standards/om) to prepare metadata and exchange data across field / computer systems;
* [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) to save time and date information,
* [ISO3166](https://en.wikipedia.org/wiki/ISO_3166-1) for country / administrative codes,
* [GPS](https://www.gps.gov/) and WGS84 longitude and latitude in decimal degrees to save the location information,
* [International DOI foundation](https://en.wikipedia.org/wiki/Digital_object_identifier) to refer to specific dataset and/or publication,
* [USDA soil classification system](https://www.nrcs.usda.gov/wps/portal/nrcs/main/soils/survey/class/) and [World Reference Base](http://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/) to classify soils,
* [USDA soil texture calculator](https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/?cid=nrcs142p2_054167) to determine and share soil texture classes including texture\-by\-hand,
* [Kellogg Soil Survey Laboratory Methods Manual](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) for reference physical and chemical soil property determination in laboratory,
* [Soils Laboratory Manual, K\-State Edition](https://kstatelibraries.pressbooks.pub/soilslabmanual/),
* [GLOSOLAN Standard Operating Procedures (SOPs)](http://www.fao.org/global-soil-partnership/glosolan/soil-analysis/standard-operating-procedures/en/#c763834),
* Open Bodem Index specification,
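A short sketch of the two ID\-related recommendations above (both the uuid and openssl packages are on CRAN):

```
library(uuid)
library(openssl)

UUIDgenerate()            # a fresh universally unique identifier for a new site / sample
md5("MY-LOCAL-ID-0001")   # a stable hash for an existing local ID, as recommended above
```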
Contributing
------------
Please feel free to contribute entries. See [GitHub
repository](https://github.com/OpenGeoHub/SoilSamples) for more detailed
instructions.
Contributors
------------
If you contribute, please also add your name and a Twitter, ORCID or blog link
below:
[Tomislav Hengl](https://twitter.com/tom_hengl), [Jonathan Sanderman](https://twitter.com/sandersoil), [Mario Antonio Guevara
Santamaria](https://orcid.org/0000-0002-9788-9947),
This document is based on the <https://www.bigbookofr.com/> repository
by Oscar Baruffa.
Disclaimer
----------
The data is provided “as is”. [OpenGeoHub foundation](https://opengeohub.org/about) and its suppliers and licensors hereby disclaim all warranties of any kind, express or implied, including, without limitation, the warranties of merchantability, fitness for a particular purpose and non\-infringement. Neither OpenGeoHub foundation nor its suppliers and licensors, makes any warranty that the Website will be error free or that access thereto will be continuous or uninterrupted. You understand that you download from, or otherwise obtain content or services through, the Website at your own discretion and risk.
Licence
-------
This website/book is free to use, and is licensed under the [Creative Commons Attribution\-ShareAlike 4\.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
To cite this dataset please use:
```
@dataset{hengl_t_2023_4748499,
author = {Hengl, T. and Gupta, S. and Minarik, R.},
title = {{An Open Compendium of Soil Datasets: Soil Observations and Measurements}},
year = 2023,
publisher = {OpenGeoHub foundation},
address = {Wageningen},
version = {v0.2},
doi = {10.5281/zenodo.4748499},
url = {https://doi.org/10.5281/zenodo.4748499}
}
```
Literature
----------
Some other connected publications and initiatives describing collation
and import of legacy soil observations and measurements that might interest
you:
* Arrouays, D., Leenaars, J. G., Richer\-de\-Forges, A. C., Adhikari,
K., Ballabio, C., Greve, M., … \& Heuvelink, G. (2017\). [**Soil
legacy data rescue via GlobalSoilMap and other international and
national initiatives**](https://doi.org/10.1016/j.grj.2017.06.001).
GeoResJ, 14, 1\-19\.
* Beillouin, D., Cardinael, R., Berre, D., Boyer, A., Corbeels, M.,
Fallot, A., … \& Demenois, J. (2021\). A global overview of studies
about land management, land‐use change, and climate change effects
on soil organic carbon. Global Change Biology. [https://doi.org/10\.1111/gcb.15998](https://doi.org/10.1111/gcb.15998)
* Batjes, N. H., Ribeiro, E., van Oostrum, A., Leenaars, J., Hengl,
T., \& de Jesus, J. M. (2017\). [**WoSIS: providing standardised soil
profile data for the world**](http://www.earth-syst-sci-data.net/9/1/2017/). Earth System Science Data, 9(1\), 1\. [https://doi.org/10\.5194/essd\-9\-1\-2017](https://doi.org/10.5194/essd-9-1-2017)
* Billings, S. A., Lajtha, K., Malhotra, A., Berhe, A. A., de Graaff, M. A.,
Earl, S., … \& Wieder, W. (2021\). [**Soil organic carbon is not just for soil
scientists: measurement recommendations for diverse practitioners**](https://doi.org/10.1002/eap.2290). Ecological Applications, 31(3\), e02290\. [https://doi.org/10\.1002/eap.2290](https://doi.org/10.1002/eap.2290)
* Brown, G., Demeterio, W. and Samuel\-Rosa, A. (2021\). [**Towards a more open Soil Science**](https://blog.scielo.org/en/2021/01/08/towards-a-more-open-soil-science/). SciELO in Perspective, 2021 \[viewed 17 June 2021].
* Gupta, S., Hengl, T., Lehmann, P., Bonetti, S., \& Or, D. (2021\). [**SoilKsatDB:
global database of soil saturated hydraulic conductivity measurements for
geoscience applications**](https://doi.org/10.5194/essd-13-1593-2021). Earth System Science Data, 13(4\), 1593\-1612\.
[https://doi.org/10\.5194/essd\-13\-1593\-2021](https://doi.org/10.5194/essd-13-1593-2021)
* Hengl, T., MacMillan, R.A., (2019\). [**Predictive Soil Mapping with
R**](https://soilmapper.org/). OpenGeoHub foundation, Wageningen, the
Netherlands, 370 pages, <https://soilmapper.org>, ISBN:
978\-0\-359\-30635\-0\.
* Moorberg, C. J., \& Crouse, D. A. (2017\). [**An open‐source laboratory manual for introductory, undergraduate soil science courses**](https://doi.org/10.4195/nse2017.06.0013). Natural Sciences Education, 46(1\), 1\-8\.
* Ramcharan, A., Hengl, T., Beaudette, D., \& Wills, S. (2017\). [**A soil
bulk density pedotransfer function based on machine learning: A case
study with the NCSS soil characterization
database**](https://doi.org/10.2136/sssaj2016.12.0421). Soil Science
Society of America Journal, 81(6\), 1279\-1287\.
[https://doi.org/10\.2136/sssaj2016\.12\.0421](https://doi.org/10.2136/sssaj2016.12.0421)
* Ros, G. H., Verweij, S. E., Janssen, S. J., De Haan, J., \& Fujita, Y. (2022\). [**An Open Soil
Health Assessment Framework Facilitating Sustainable Soil Management**](https://doi.org/10.1021/acs.est.2c04516). Environmental Science \& Technology, 56(23\), 17375\-17384\.
* Rossiter, D.G.,: [**Compendium of Soil Geographical
Databases**](https://www.isric.org/explore/soil-geographic-databases).
About OpenGeoHub
----------------
**OpenGeoHub foundation** is a not\-for\-profit research foundation
located in Wageningen, the Netherlands. We specifically promote
publishing and sharing of Open Geographical and Geoscientific Data,
using and developing Open Source Software and encouraging and empowering
under\-represented researchers e.g. those from ODA recipient countries
and female researchers. We believe that the key measure of quality of
research in all sciences (and especially in geographical information
sciences) is in transparency and reproducibility of the computer code
used to generate results (read more in: [“Everyone has a right to know
what is happening with the planet”](https://opengeohub.medium.com/)).
Acknowledgments
---------------
[**SoilSpec4GG**](https://soilspectroscopy.org/) is a USDA\-funded [Food and Agriculture Cyberinformatics
Tools Coordinated Innovation Network NIFA Award \#2020\-67021\-32467](https://nifa.usda.gov/press-release/nifa-invests-over-7-million-big-data-artificial-intelligence-and-other) project. It brings together soil
scientists, spectroscopists, informaticians, data scientists and
software engineers to overcome some of the current bottlenecks
preventing wider and more efficient use of soil spectroscopy. For more info refer
to: <https://soilspectroscopy.org/>
**[EcoDataCube.eu](https://EcoDataCube.eu/)** project is co\-financed by the European Union (**[CEF Telecom project 2018\-EU\-IA\-0095](https://ec.europa.eu/inea/en/connecting-europe-facility/cef-telecom/2018-eu-ia-0095)**).
**[EarthMonitor.org](https://EarthMonitor.org/)** project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement **[No. 101059548](https://cordis.europa.eu/project/id/101059548)**.
**[AI4SoilHealth.eu](https://AI4SoilHealth.eu/)** project has received funding from
the European Union’s Horizon Europe research and innovation programme under grant
agreement **[No. 101086179](https://cordis.europa.eu/project/id/101086179)**.
Rationale
---------
[
This is a public compendium of global, regional, national and
sub\-national **soil samples** and/or **soil profile** datasets (points with
Observations and Measurements of soil properties and characteristics).
Datasets listed here, assuming [compatible open data license](https://opendefinition.org/licenses/), are afterwards
imported into the [**Global compilation of soil chemical and physical
properties and soil classes**](https://opengeohub.org/about-openlandmap/) and
eventually used to create a better open soil information across countries.
The specific objectives of this initiative are:
* To enable data digitization, import and binding \+ harmonization,
* To accelerate research collaboration and networking,
* To enable development of more accurate / more usable global and
regional soil property and class maps (typically published via
<https://OpenLandMap.org>),
Download compiled data
----------------------
Compiled data (imported, standardized, quality\-controlled) is available
through a diversity of standard formats:
* RDS file (native R data format);
* GPKG file ([Geopackage file](https://www.geopackage.org/) ready to be opened in QGIS);
All files can be downloaded from the `/out` [directory](https://github.com/OpenGeoHub/SoilSamples/-/tree/master/out).
Add your own data
-----------------
The minimum requirements to submit a dataset for inclusion to [the
OpenLandMap repository](https://gitlab.com/openlandmap/) are:
* License and terms of use clearly specified AND,
* Complete and consistent metadata that can ensure correct
standardization and harmonization steps AND,
* **At least 50 unique spatial locations** AND,
* No broken or invalid URLs,
Datasets that do NOT satisfy the above listed minimum requirements might be
removed. If you discover an issue with license, data description or
version number of a dataset, please open a [Github
issue](https://github.com/OpenGeoHub/SoilSamples/issues).
Figure 0\.1: Soil profiles and soil samples with chemical and physical properties global compilation. For more info see: [https://gitlab.com/openlandmap/compiled\-ess\-point\-data\-sets](https://gitlab.com/openlandmap/compiled-ess-point-data-sets).
Recommended settings for all datasets are:
* Peer\-reviewed versions of the datasets (i.e. a dataset accompanied
with a peer\-reviewed publication) should have the priority,
* Register your dataset (use e.g. <https://zenodo.org>) and assign a DOI
to each version,
* Provide enough metadata so that it can be imported and bind with
other data without errors,
* If your dataset is a compilation of previously published datasets, please
indicate in the description,
Information outdated or missing? Please open an issue or best do a
correction and then a [pull
request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
Existing soil data projects and initiatives
-------------------------------------------
Multiple international organizations from [FAO’s Global Soil Partnership](http://www.fao.org/global-soil-partnership/en/) to [UNCCD’s Land Degredation Neutrality](https://www.unccd.int/actions/ldn-target-setting-programme), [European Commission](https://esdac.jrc.ec.europa.eu/) and similar,
support soil data collation projects and especially curation of the legacy soil data.
Some existing soil Observations and Measurements (O\&M) soil data initiatives include:
* [**FAO’s Data Hub**](https://www.fao.org/soils-portal/data-hub/en/),
* [**FAO’s Global SoilInformation System (GLOSIS)**](https://data.apps.fao.org/glosis/),
* [**Fine Root Ecology Database (FRED)**](https://roots.ornl.gov/),
* [**FLUXNET global network**](https://fluxnet.fluxdata.org/),
* [**Global database of soil nematodes**](https://www.nature.com/articles/s41597-020-0437-3),
* [**Global soil macrofauna database**](http://macrofauna.earthworms.info/),
* [**Global soil respiration database (SRDB)**](https://github.com/bpbond/srdb),
* [**International Soil Modeling Consortium (ISMC)**](https://soil-modeling.org),
* [**International Soil Moisture Network**](https://ismn.geo.tuwien.ac.at/en/),
* [**International Soil Radiocarbon Database (ISRaD)**](https://soilradiocarbon.org),
* [**International Soil Carbon Network (ISCN)**](http://iscn.fluxdata.org/),
* [**LandPKS project**](http://portal.landpotential.org/#/landpksmap),
* [**Long Term Ecological Research (LTER) Network sites**](https://lternet.edu/site/),
* [**National Ecological Observatory Network (NEON)**](https://www.neonscience.org),
* [**Open Soil Spectral Library (OSSL)**](https://soilspectroscopy.org),
* [**Open Bodem Index**](https://agrocares.github.io/Open-Bodem-Index-Calculator/),
* [**ORNL DAAC theme soil**](https://daac.ornl.gov/get_data/#themes),
* [**Soils Data Harmonization (SoDaH)**](https://lter.github.io/som-website),
* [**WoSIS Soil Profile Database**](https://www.isric.org/explore/wosis),
UN’s FAO hosts [**International Network of Soil Information Institutions**](https://www.fao.org/global-soil-partnership/pillars-action/4-information-data/insii/en/), which we
recommend joining and actively contributing. INSII and FAO’s Global Soil Partnership aim at
developing and hosting [**Global SoilInformation System (GLOSIS)**](https://data.apps.fao.org/glosis/),
although this does not serve yet any soil observations and measurements.
A more in\-depth inventory of all various national and international soil datasets can be found in:
* Rossiter, D.G.,: [**Compendium of Soil Geographical Databases**](https://www.isric.org/explore/soil-geographic-databases)
Target soil variables
---------------------
Soil variables of interest include:
1. **Chemical soil properties**:
* Soil organic carbon, total carbon, total nitrogen,
* Soil pH, effective Cation Exchange Capacity (eCEC),
* Soil sodicity (presence of a high proportion of sodium ions relative to other cations),
* Macro\-nutrients: extractable — potassium (K), calcium (Ca), sodium
(Na), magnesium (Mg) and similar,
* Micro\-nutrients: phosphorus (P), sulfur (S), iron (Fe), zinc (Zn)
and similar,
* Soil pollutants, heavy metals and similar,
* Electrical conductivity,
2. **Physical soil properties**:
* Soil texture and texture fractions: silt, clay and sand, stone content,
* Bulk density, depth to bedrock and similar,
* Hydraulic conductivity, water content — Field Capacity (FC; the amount of water
held in the soil after it has been fully wetted and free drainage has stopped),
Permanent Wilting Point (PWP; the soil moisture condition at which the plant could
not obtain water and would wilt and die), Plant Available Water Capacity (PAWC;
the amount of water between field capacity and permanent wilting point water holding capacity) and
similar,
* Soil temperature,
3. **Soil biological / biodiversity variables**:
* Soil biomass,
* Soil micro\-, meso\-, macro\- and mega\-fauna abundance,
* Soil biodiversity indices,
4. **Soil classification / taxonomy variables**:
* Soil type,
* Soi suitability classes, soil fertility classes,
* Soil texture classes and families,
5. **Soil absorbances / soil spectroscopy variables**:
* Soil absorbance in VIS\-NIR and MIR part of spectra,
Recommended O\&M standards
--------------------------
As a general rule of thumb we recommend all contributors to use the following
[general scheme](https://soilspectroscopy.github.io/ossl-manual/) to organize Soil Observations \& Measurements with 3–4 main tables
and metadata \+ legends organized in other tables:
* Soil site information (geographical coordinates, land use / land cover, soil classification etc),
* Soil horizon information (soil observations and measurements specific to soil layers / diagnostic horizons),
* Proximal soil sensing information including soil scans,
For making soil observations and measurements we recommend following the USDA [National Cooperative Soil Survey (NCSS) Soil
Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/) codes and specification as much as possible. These are explained in detail in the [**Kellogg Soil Survey Laboratory Methods Manual**](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf)
and [**The Field Book for Describing and Sampling Soils**](https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/research/guide/).
Likewise, [**FAO Guidelines for soil description**](http://www.fao.org/3/a0541e/a0541e.pdf),
and the FAO’s [GSOC measurement, monitoring, reporting and verification (MRV) protocol](http://www.fao.org/documents/card/en/c/cb0509en/) also explain in
detail how to collect soil samples and setup a system for monitoring soil organic carbon.
It is recommended that one should, as much as possible, use the international standards
and references. Some highly recommended protocols and standards include:
* [UUID generator tool](https://cran.r-project.org/package=uuid) to generate unique ID’s for unique soil sites, horizons, samples etc (to convert an existing local ID to UUID, best use `openssl::md5(local_id)`),
* [Open Location Codes](https://opensource.google/projects/open-location-code) to generate geographic location codes,
* [OGC standards](https://www.ogc.org/standards/om) to prepare metadata and exchange data across field / computer systems;
* [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) to save time and date information,
* [ISO3166](https://en.wikipedia.org/wiki/ISO_3166-1) for country / administrative codes,
* [GPS](https://www.gps.gov/) and WGS84 longitude and latitude in decimal degrees to save the location information,
* [International DOI foundation](https://en.wikipedia.org/wiki/Digital_object_identifier) to refer to specific dataset and/or publication,
* [USDA soil classification system](https://www.nrcs.usda.gov/wps/portal/nrcs/main/soils/survey/class/) and [World Reference Base](http://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/) to classify soils,
* [USDA soil texture calculator](https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/survey/?cid=nrcs142p2_054167) to determine and share soil texture classes including texture\-by\-hand,
* [Kellogg Soil Survey Laboratory Methods Manual](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) for reference physical and chemical soil property determination in laboratory,
* [Soils Laboratory Manual, K\-State Edition](https://kstatelibraries.pressbooks.pub/soilslabmanual/),
* [GLOSOLAN Standard Operating Procedures (SOPs)](http://www.fao.org/global-soil-partnership/glosolan/soil-analysis/standard-operating-procedures/en/#c763834),
* Open Bodem Index specification,
Contributing
------------
Please feel free to contribute entries. See [GitHub
repository](https://github.com/OpenGeoHub/SoilSamples) for more detailed
instructions.
Contributors
------------
If you contribute, add also your name and Twitter, ORCID or blog link
below:
[Tomislav Hengl](https://twitter.com/tom_hengl), [Jonathan Sanderman](https://twitter.com/sandersoil), [Mario Antonio Guevara
Santamaria](https://orcid.org/0000-0002-9788-9947),
This document is based on the <https://www.bigbookofr.com/> repository
by Oscar Baruffa.
Disclaimer
----------
The data is provided “as is”. [OpenGeoHub foundation](https://opengeohub.org/about) and its suppliers and licensors hereby disclaim all warranties of any kind, express or implied, including, without limitation, the warranties of merchantability, fitness for a particular purpose and non\-infringement. Neither OpenGeoHub foundation nor its suppliers and licensors, makes any warranty that the Website will be error free or that access thereto will be continuous or uninterrupted. You understand that you download from, or otherwise obtain content or services through, the Website at your own discretion and risk.
Licence
-------
[
This website/book is free to use, and is licensed under the [Creative Commons Attribution\-ShareAlike 4\.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
To cite this dataset please use:
```
@dataset{hengl_t_2023_4748499,
author = {Hengl, T. and Gupta, S. and Minarik, R.},
title = {{An Open Compendium of Soil Datasets: Soil Observations and Measurements}},
year = 2023,
publisher = {OpenGeoHub foundation},
address = {Wageningen},
version = {v0.2},
doi = {10.5281/zenodo.4748499},
url = {https://doi.org/10.5281/zenodo.4748499}
}
```
Literature
----------
Some other connected publications and initiatives describing collation
and import of legacy soil observations and measurements that might interest
you:
* Arrouays, D., Leenaars, J. G., Richer\-de\-Forges, A. C., Adhikari,
K., Ballabio, C., Greve, M., … \& Heuvelink, G. (2017\). [**Soil
legacy data rescue via GlobalSoilMap and other international and
national initiatives**](https://doi.org/10.1016/j.grj.2017.06.001).
GeoResJ, 14, 1\-19\.
* Beillouin, D., Cardinael, R., Berre, D., Boyer, A., Corbeels, M.,
Fallot, A., … \& Demenois, J. (2021\). A global overview of studies
about land management, land‐use change, and climate change effects
on soil organic carbon. Global Change Biology. [https://doi.org/10\.1111/gcb.15998](https://doi.org/10.1111/gcb.15998)
* Batjes, N. H., Ribeiro, E., van Oostrum, A., Leenaars, J., Hengl,
T., \& de Jesus, J. M. (2017\). [**WoSIS: providing standardised soil
profile data for the world**](http://www.earth-syst-sci-data.net/9/1/2017/). Earth System Science Data, 9(1\), 1\. [https://doi.org/10\.5194/essd\-9\-1\-2017](https://doi.org/10.5194/essd-9-1-2017)
* Billings, S. A., Lajtha, K., Malhotra, A., Berhe, A. A., de Graaff, M. A.,
Earl, S., … \& Wieder, W. (2021\). [**Soil organic carbon is not just for soil
scientists: measurement recommendations for diverse practitioners**](https://doi.org/10.1002/eap.2290). Ecological Applications, 31(3\), e02290\. [https://doi.org/10\.1002/eap.2290](https://doi.org/10.1002/eap.2290)
* Brown, G., Demeterio, W. and Samuel\-Rosa, A. (2021\). [**Towards a more open Soil Science**](https://blog.scielo.org/en/2021/01/08/towards-a-more-open-soil-science/). SciELO in Perspective, 2021 \[viewed 17 June 2021].
* Gupta, S., Hengl, T., Lehmann, P., Bonetti, S., \& Or, D. (2021\). [**SoilKsatDB:
global database of soil saturated hydraulic conductivity measurements for
geoscience applications**](https://doi.org/10.5194/essd-13-1593-2021). Earth System Science Data, 13(4\), 1593\-1612\.
[https://doi.org/10\.5194/essd\-13\-1593\-2021](https://doi.org/10.5194/essd-13-1593-2021)
* Hengl, T., MacMillan, R.A., (2019\). [**Predictive Soil Mapping with
R**](https://soilmapper.org/). OpenGeoHub foundation, Wageningen, the
Netherlands, 370 pages, <https://soilmapper.org>, ISBN:
978\-0\-359\-30635\-0\.
* Moorberg, C. J., \& Crouse, D. A. (2017\). [**An open‐source laboratory manual for introductory, undergraduate soil science courses**](https://doi.org/10.4195/nse2017.06.0013). Natural Sciences Education, 46(1\), 1\-8\.
* Ramcharan, A., Hengl, T., Beaudette, D., \& Wills, S. (2017\). [**A soil
bulk density pedotransfer function based on machine learning: A case
study with the NCSS soil characterization
database**](https://doi.org/10.2136/sssaj2016.12.0421). Soil Science
Society of America Journal, 81(6\), 1279\-1287\.
[https://doi.org/10\.2136/sssaj2016\.12\.0421](https://doi.org/10.2136/sssaj2016.12.0421)
* Ros, G. H., Verweij, S. E., Janssen, S. J., De Haan, J., \& Fujita, Y. (2022\). [**An Open Soil
Health Assessment Framework Facilitating Sustainable Soil Management**](https://doi.org/10.1021/acs.est.2c04516). Environmental Science \& Technology, 56(23\), 17375\-17384\.
* Rossiter, D.G.: [**Compendium of Soil Geographical Databases**](https://www.isric.org/explore/soil-geographic-databases).
About OpenGeoHub
----------------
**OpenGeoHub foundation** is a not\-for\-profit research foundation
located in Wageningen, the Netherlands. We specifically promote
publishing and sharing of Open Geographical and Geoscientific Data,
using and developing Open Source Software and encouraging and empowering
under\-represented researchers e.g. those from ODA recipient countries
and female researchers. We believe that the key measure of quality of
research in all sciences (and especially in geographical information
sciences) is in transparency and reproducibility of the computer code
used to generate results (read more in: [“Everyone has a right to know
what is happening with the planet”](https://opengeohub.medium.com/)).
Acknowledgments
---------------
[**SoilSpec4GG**](https://soilspectroscopy.org/) is a USDA\-funded [Food and Agriculture Cyberinformatics
Tools Coordinated Innovation Network NIFA Award \#2020\-67021\-32467](https://nifa.usda.gov/press-release/nifa-invests-over-7-million-big-data-artificial-intelligence-and-other) project. It brings together soil
scientists, spectroscopists, informaticians, data scientists and
software engineers to overcome some of the current bottlenecks
preventing wider and more efficient use of soil spectroscopy. For more info refer
to: <https://soilspectroscopy.org/>
**[EcoDataCube.eu](https://EcoDataCube.eu/)** project is co\-financed by the European Union (**[CEF Telecom project 2018\-EU\-IA\-0095](https://ec.europa.eu/inea/en/connecting-europe-facility/cef-telecom/2018-eu-ia-0095)**).
**[EarthMonitor.org](https://EarthMonitor.org/)** project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement **[No. 101059548](https://cordis.europa.eu/project/id/101059548)**.
**[AI4SoilHealth.eu](https://AI4SoilHealth.eu/)** project has received funding from
the European Union’s Horizon Europe research and innovation programme under grant
agreement **[No. 101086179](https://cordis.europa.eu/project/id/101086179)**.
5 Soil chemical and physical properties
=======================================
You are reading the work\-in\-progress An Open Compendium of Soil Sample and Soil Profile Datasets. This chapter is currently a draft version; a peer\-reviewed publication is pending. You can find the polished first edition at <https://opengeohub.github.io/SoilSamples/>.
Last update: 2023\-05\-10
5\.1 Overview
-------------
This section describes the import steps used to produce a global compilation of soil
laboratory data with chemical (and physical) soil properties that can then be
used for predictive soil mapping / modeling at global and regional scales.
For more about soil chemical properties and global soil profile and sample data sets, see:
* Arrouays, D., Leenaars, J. G., Richer\-de\-Forges, A. C., Adhikari, K., Ballabio, C., Greve, M., … \& Heuvelink, G. (2017\). [Soil legacy data rescue via GlobalSoilMap and other international and national initiatives](https://doi.org/10.1016/j.grj.2017.06.001). GeoResJ, 14, 1\-19\.
* Batjes, N. H., Ribeiro, E., van Oostrum, A., Leenaars, J., Hengl, T., \& de Jesus, J. M. (2017\). [WoSIS: providing standardised soil profile data for the world](http://www.earth-syst-sci-data.net/9/1/2017/). Earth System Science Data, 9(1\), 1\.
* Hengl, T., MacMillan, R.A., (2019\). [Predictive Soil Mapping with R](https://soilmapper.org/). OpenGeoHub foundation, Wageningen, the Netherlands, 370 pages, www.soilmapper.org, ISBN: 978\-0\-359\-30635\-0\.
* Rossiter, D.G.: [Compendium of Soil Geographical Databases](https://www.isric.org/explore/soil-geographic-databases).
5\.2 Specifications
-------------------
#### 5\.2\.0\.1 Data standards
* Metadata information: [“Soil Survey Investigation Report No. 42\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) and [“Soil Survey Investigation Report No. 45\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052226.pdf),
* Model DB: [National Cooperative Soil Survey (NCSS) Soil Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/),
#### 5\.2\.0\.2 *Target variables:*
```
site.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees",
"latitude_decimal_degrees")
hor.names = [c](https://rdrr.io/r/base/c.html)("labsampnum","site_key","layer_sequence","hzn_top","hzn_bot",
"hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa",
"oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2",
"cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext",
"na_ext", "k_ext", "ec_satp", "ec_12pre")
## target structure:
col.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees",
"latitude_decimal_degrees", "labsampnum", "layer_sequence", "hzn_top",
"hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa",
"sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o",
"ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext",
"mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db",
"confidence_degree", "project_url", "citation_url")
```
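Every import block in this chapter follows the same standardization pattern: read the source tables, rename or derive the target columns, add any target columns the source does not provide (filled with `NA`), and subset to the common column list `col.names` defined above. A minimal sketch of that pattern, using a made-up two-row source table with hypothetical column names `my_clay` and `my_ph`:

```
## Minimal sketch of the column-standardization pattern used in the import
## blocks below; the source table and its column names are made up.
src <- data.frame(
  site_key = c("A1", "A2"),
  my_clay  = c(22, 35),   # hypothetical source name for clay_tot_psa
  my_ph    = c(6.1, 7.4)  # hypothetical source name for ph_h2o
)
## rename source columns to the target names:
names(src)[names(src) == "my_clay"] <- "clay_tot_psa"
names(src)[names(src) == "my_ph"]   <- "ph_h2o"
## add all remaining target columns as NA, then subset to the common structure
## (col.names is defined in the code block above):
x.na <- col.names[which(!col.names %in% names(src))]
if(length(x.na) > 0){ for(i in x.na){ src[, i] <- NA } }
src.std <- src[, col.names]
```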
Target variables listed:
* `clay_tot_psa`: Clay, Total in % wt for \<2 mm soil fraction,
* `silt_tot_psa`: Silt, Total in % wt for \<2 mm soil fraction,
* `sand_tot_psa`: Sand, Total in % wt for \<2 mm soil fraction,
* `oc`: Carbon, Organic in g/kg for \<2 mm soil fraction,
* `oc_d`: Soil organic carbon density in kg/m3 (see the worked example after this list),
* `c_tot`: Carbon, Total in g/kg for \<2 mm soil fraction,
* `n_tot`: Nitrogen, Total NCS in g/kg for \<2 mm soil fraction,
* `ph_kcl`: pH, KCl Suspension for \<2 mm soil fraction,
* `ph_h2o`: pH, 1:1 Soil\-Water Suspension for \<2 mm soil fraction,
* `ph_cacl2`: pH, CaCl2 Suspension for \<2 mm soil fraction,
* `cec_sum`: Cation Exchange Capacity, Summary, in cmol(\+)/kg for \<2 mm soil fraction,
* `cec_nh4`: Cation Exchange Capacity, NH4 prep, in cmol(\+)/kg for \<2 mm soil fraction,
* `ecec`: Cation Exchange Capacity, Effective, CMS derived value default, standard prep, in cmol(\+)/kg for \<2 mm soil fraction,
* `wpg2`: Coarse fragments in % wt for \>2 mm soil fraction,
* `db_od`: Bulk density (Oven Dry) in g/cm3 (4A1h),
* `ca_ext`: Calcium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `mg_ext`: Magnesium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `na_ext`: Sodium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `k_ext`: Potassium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `ec_satp`: Electrical Conductivity, Saturation Extract in dS/m for \<2 mm soil fraction,
* `ec_12pre`: Electrical Conductivity, Predict, 1:2 (w/w) in dS/m for \<2 mm soil fraction,
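The derived variable `oc_d` is computed the same way in most of the import blocks below: organic carbon content, oven-dry bulk density and a coarse-fragment correction are combined into a carbon density in kg/m3. A small worked example with made-up input values (note that some sources report organic carbon in %, in which case the first division is by 100 rather than 1000):

```
## Worked example (made-up values) of the oc_d calculation used below.
oc    <- 15    # organic carbon in g/kg (<2 mm fraction)
db_od <- 1.3   # oven-dry bulk density in g/cm3
wpg2  <- 10    # coarse fragments (>2 mm) in % wt
## 1 g/cm3 = 1000 kg/m3; the last term corrects for coarse fragments:
oc_d <- signif(oc / 1000 * db_od * 1000 * (100 - ifelse(is.na(wpg2), 0, wpg2)) / 100, 3)
oc_d  # ~17.6 kg/m3
```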
5\.3 Data import
----------------
#### 5\.3\.0\.1 National Cooperative Soil Survey Characterization Database
* National Cooperative Soil Survey, (2020\). National Cooperative Soil Survey Characterization Database. Data download URL: <http://ncsslabdatamart.sc.egov.usda.gov/>
* O’Geen, A., Walkinshaw, M., \& Beaudette, D. (2017\). SoilWeb: A multifaceted interface to soil survey information. Soil Science Society of America Journal, 81(4\), 853\-862\. [https://doi.org/10\.2136/sssaj2016\.11\.0386n](https://doi.org/10.2136/sssaj2016.11.0386n)
This data set is continuously updated.
```
if(!exists("chemsprops.NCSS")){ ## condition reconstructed: skip re-import if object already exists
ncss.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Site_Location.csv", stringsAsFactors = FALSE)
ncss.layer <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Layer.csv", stringsAsFactors = FALSE)
ncss.bdm <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Bulk_Density_and_Moisture.csv", stringsAsFactors = FALSE)
## multiple measurements
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ncss.bdm$prep_code))
ncss.bdm.0 <- ncss.bdm[ncss.bdm$prep_code=="S",]
[summary](https://rdrr.io/r/base/summary.html)(ncss.bdm.0$db_od)
## 0 BD values --- error!
ncss.carb <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Carbon_and_Extractions.csv", stringsAsFactors = FALSE)
ncss.organic <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Organic.csv", stringsAsFactors = FALSE)
ncss.pH <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_pH_and_Carbonates.csv", stringsAsFactors = FALSE)
#str(ncss.pH)
#summary(ncss.pH$ph_h2o)
#summary(!is.na(ncss.pH$ph_h2o))
ncss.PSDA <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_PSDA_and_Rock_Fragments.csv", stringsAsFactors = FALSE)
ncss.CEC <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_CEC_and_Bases.csv")
ncss.salt <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Salt.csv")
ncss.horizons <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(ncss.bdm.0, ncss.layer, ncss.carb, ncss.organic[,[c](https://rdrr.io/r/base/c.html)("labsampnum", "result_source_key", "c_tot", "n_tot", "db_od", "oc")], ncss.pH, ncss.PSDA, ncss.CEC, ncss.salt), type = "full", by="labsampnum")
#head(ncss.horizons)
[nrow](https://rdrr.io/r/base/nrow.html)(ncss.horizons)
ncss.horizons$oc_d = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$oc / 100 * ncss.horizons$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(ncss.horizons$wpg2), 0, ncss.horizons$wpg2))/100, 3)
ncss.horizons$ca_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$ca_nh4 * 200, 4)
ncss.horizons$mg_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$mg_nh4 * 121, 3)
ncss.horizons$na_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$na_nh4 * 230, 3)
ncss.horizons$k_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$k_nh4 * 391, 3)
#summary(ncss.horizons$oc_d)
## Values <0!!
chemsprops.NCSS = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(ncss.site[,site.names], ncss.horizons[,hor.names], by="site_key")
chemsprops.NCSS$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(chemsprops.NCSS$site_obsdate, format="%m/%d/%Y"), "%Y-%m-%d")
chemsprops.NCSS$source_db = "USDA_NCSS"
#dim(chemsprops.NCSS)
chemsprops.NCSS$oc = chemsprops.NCSS$oc * 10
chemsprops.NCSS$n_tot = chemsprops.NCSS$n_tot * 10
#hist(log1p(chemsprops.NCSS$oc), breaks=45, col="gray")
chemsprops.NCSS$confidence_degree = 1
chemsprops.NCSS$project_url = "http://ncsslabdatamart.sc.egov.usda.gov/"
chemsprops.NCSS$citation_url = "https://doi.org/10.2136/sssaj2016.11.0386n"
chemsprops.NCSS = complete.vars(chemsprops.NCSS, sel=[c](https://rdrr.io/r/base/c.html)("tex_psda","oc","clay_tot_psa","ecec","ph_h2o","ec_12pre","k_ext"))
#rm(ncss.horizons)
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NCSS)
#> [1] 136011 36
#summary(!is.na(chemsprops.NCSS$oc))
## texture classes need to be cleaned-up
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(chemsprops.NCSS$tex_psda))
#> c C cl CL
#> 2391 10908 19 10158 5
#> cos CoS COS cosl CoSL
#> 2424 6 4 4543 2
#> Fine Sandy Loam fs fsl FSL l
#> 1 2357 10701 16 17038
#> L lcos LCoS lfs ls
#> 2 2933 3 1805 3166
#> LS lvfs s S sc
#> 1 152 2958 1 601
#> scl SCL si sic SiC
#> 5456 5 854 7123 11
#> sicl SiCL sil SiL SIL
#> 13718 14 22230 4 28
#> sl SL vfs vfsl VFSL
#> 5547 13 64 2678 3
#> NA's
#> 6068
```
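The `tex_psda` summary above shows that the same texture class appears under several spellings (e.g. `c` and `C` for clay, or `sil`, `SiL` and `SIL` for silt loam). A minimal sketch of how such codes could be harmonized with a named lookup vector; the mapping below covers only a few example classes and is illustrative, not the cleaning actually applied later in the book:

```
## Illustrative sketch only: harmonize a few of the texture-class spellings
## seen in the summary above via a named lookup vector.
tex.lookup <- c(
  "c"   = "clay",       "C"   = "clay",
  "sil" = "silt loam",  "SiL" = "silt loam", "SIL" = "silt loam",
  "sl"  = "sandy loam", "SL"  = "sandy loam"
)
tex.raw   <- c("C", "sil", "SIL", "sl", "xyz")   # "xyz" = unknown code
tex.clean <- unname(tex.lookup[tex.raw])         # unknown codes become NA
tex.clean
#> [1] "clay" "silt loam" "silt loam" "sandy loam" NA
```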
#### 5\.3\.0\.2 Rapid Carbon Assessment (RaCA)
* Soil Survey Staff. Rapid Carbon Assessment (RaCA) project. United States Department of Agriculture, Natural Resources Conservation Service. Available online. June 1, 2013 (FY2013 official release). Data download URL: [https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid\=nrcs142p2\_054164](https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid=nrcs142p2_054164)
* **Note**: Locations of each site have been degraded due to confidentiality and only reflect the general position of each site.
* Wills, S. et al. (2013\) [“Rapid carbon assessment (RaCA) methodology: Sampling and Initial Summary. United States Department of Agriculture.”](https://www.nrcs.usda.gov/wps/PA_NRCSConsumption/download?cid=nrcs142p2_052841&ext=pdf) Natural Resources Conservation Service, National Soil Survey Center.
```
if(!exists("chemsprops.RaCA")){
raca.df <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCa_general_location.csv", stringsAsFactors = FALSE)
[names](https://rdrr.io/r/base/names.html)(raca.df)[1] = "rcasiteid"
raca.layer <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCA_samples_JULY2016.csv", stringsAsFactors = FALSE)
raca.layer$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_long
raca.layer$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_lat
raca.layer$site_obsdate = "2013"
[summary](https://rdrr.io/r/base/summary.html)(raca.layer$Calc_SOC)
#plot(raca.layer[!duplicated(raca.layer$rcasiteid),c("longitude_decimal_degrees", "latitude_decimal_degrees")])
#summary(raca.layer$SOC_pred1)
## some strange groupings around small values
raca.layer$oc_d = [signif](https://rdrr.io/r/base/Round.html)(raca.layer$Calc_SOC / 100 * raca.layer$Bulkdensity * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(raca.layer$fragvolc), 0, raca.layer$fragvolc))/100, 3)
raca.layer$oc = raca.layer$Calc_SOC * 10
#summary(raca.layer$oc_d)
raca.h.lst <- [c](https://rdrr.io/r/base/c.html)("rcasiteid", "lay_field_label1", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "Lab.Sample.No", "layer_Number", "TOP", "BOT", "hzname", "texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot_ncs", "n_tot_ncs", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "fragvolc", "Bulkdensity", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = raca.h.lst[[which](https://rdrr.io/r/base/which.html)(!raca.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(raca.layer))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ raca.layer[,i] = NA } }
chemsprops.RaCA = raca.layer[,raca.h.lst]
chemsprops.RaCA$source_db = "RaCA2016"
chemsprops.RaCA$confidence_degree = 4
chemsprops.RaCA$project_url = "https://www.nrcs.usda.gov/survey/raca/"
chemsprops.RaCA$citation_url = "https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052841.pdf"
chemsprops.RaCA = complete.vars(chemsprops.RaCA, sel = [c](https://rdrr.io/r/base/c.html)("oc", "fragvolc"))
}
#> Joining by: rcasiteid
#> Joining by: rcasiteid
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.RaCA)
#> [1] 53664 36
```
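The `complete.vars()` call at the end of the import blocks is a small helper defined elsewhere in this compendium; it is not part of any package. In essence it keeps only rows with valid coordinates and at least one non-missing value among the selected target columns. A minimal sketch of what such a filter could look like; this is an assumption about the actual implementation, shown only to make the blocks easier to read:

```
## Hedged sketch of a complete.vars()-style filter; the real helper is
## defined elsewhere in the book and may differ in detail.
complete.vars.sketch <- function(df, sel,
                                 coords = c("longitude_decimal_degrees",
                                            "latitude_decimal_degrees")){
  has.value  <- rowSums(!is.na(df[, sel, drop = FALSE])) > 0
  has.coords <- !is.na(df[, coords[1]]) & !is.na(df[, coords[2]])
  df[which(has.value & has.coords), ]
}
```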
#### 5\.3\.0\.3 National Geochemical Database Soil
* Smith, D.B., Cannon, W.F., Woodruff, L.G., Solano, Federico, Kilburn, J.E., and Fey, D.L., (2013\). [Geochemical and
mineralogical data for soils of the conterminous United States](http://pubs.usgs.gov/ds/801/). U.S. Geological Survey Data Series 801, 19 p., <http://pubs.usgs.gov/ds/801/>.
* Grossman, J. N. (2004\). [The National Geochemical Survey\-database and documentation](https://doi.org/10.3133/ofr20041001). U.S. Geological Survey Open\-File Report 2004\-1001\. [https://doi.org/10\.3133/ofr20041001](https://doi.org/10.3133/ofr20041001).
* **Note**: NGS focuses on stream\-sediment samples, but also contains many soil samples.
```
if(!exists("chemsprops.USGS.NGS")){
ngs.points <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/site.txt", sep=",")
## 4857 pnts
ngs.layers <- [lapply](https://rdrr.io/r/base/lapply.html)([c](https://rdrr.io/r/base/c.html)("top5cm.txt", "ahorizon.txt", "chorizon.txt"), function(i){[read.csv](https://rdrr.io/r/utils/read.table.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/", i), sep=",")})
ngs.layers = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)(ngs.layers)
#dim(ngs.layers)
# 14571 126
#summary(ngs.layers$tot_carb_pct)
#lattice::xyplot(c_org_pct ~ c_tot_pct, ngs.layers, scales=list(x = list(log = 2), y = list(log = 2)))
#lattice::xyplot(c_org_pct ~ tot_clay_pct, ngs.layers, scales=list(y = list(log = 2)))
ngs.layers$c_tot = ngs.layers$c_tot_pct * 10
ngs.layers$oc = ngs.layers$c_org_pct * 10
ngs.layers$hzn_top = [sapply](https://rdrr.io/r/base/lapply.html)(ngs.layers$depth_cm, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-")[[1]][1]})
ngs.layers$hzn_bot = [sapply](https://rdrr.io/r/base/lapply.html)(ngs.layers$depth_cm, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-")[[1]][2]})
#summary(ngs.layers$tot_clay_pct)
#summary(ngs.layers$k_pct) ## very high numbers?
## question is if the geochemical element results are compatible with e.g. k_ext?
t.ngs = [c](https://rdrr.io/r/base/c.html)("lab_id", "site_id", "horizon", "hzn_top", "hzn_bot", "tot_clay_pct", "c_tot", "oc")
ngs.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(ngs.points, ngs.layers[,t.ngs]) ## column selection reconstructed from t.ngs defined above
ngs.m$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)(ngs.m$colldate, format="%Y-%m-%d")
ngs.h.lst <- [c](https://rdrr.io/r/base/c.html)("site_id", "quad", "site_obsdate", "longitude", "latitude", "lab_id", "layer_sequence", "hzn_top", "hzn_bot", "horizon", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ngs.h.lst[[which](https://rdrr.io/r/base/which.html)(!ngs.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ngs.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ngs.m[,i] = NA } }
chemsprops.USGS.NGS = ngs.m[,ngs.h.lst]
chemsprops.USGS.NGS$source_db = "USGS.NGS"
chemsprops.USGS.NGS$confidence_degree = 1
chemsprops.USGS.NGS$project_url = "https://mrdata.usgs.gov/ds-801/"
chemsprops.USGS.NGS$citation_url = "https://pubs.usgs.gov/ds/801/"
chemsprops.USGS.NGS = complete.vars(chemsprops.USGS.NGS, sel = [c](https://rdrr.io/r/base/c.html)("tot_clay_pct", "oc"), coords = [c](https://rdrr.io/r/base/c.html)("longitude", "latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.USGS.NGS)
#> [1] 9446 36
```
#### 5\.3\.0\.4 Forest Inventory and Analysis Database (FIADB)
* Domke, G. M., Perry, C. H., Walters, B. F., Nave, L. E., Woodall, C. W., \& Swanston, C. W. (2017\). [Toward inventory‐based estimates of soil organic carbon in forests of the United States](https://doi.org/10.1002/eap.1516). Ecological Applications, 27(4\), 1223\-1235\. [https://doi.org/10\.1002/eap.1516](https://doi.org/10.1002/eap.1516)
* Forest Inventory and Analysis, (2014\). The Forest Inventory and Analysis Database: Database description
and user guide version 6\.0\.1 for Phase 3\. U.S. Department of Agriculture, Forest Service. 182 p.
\[Online]. Available: [https://www.fia.fs.fed.us/library/database\-documentation/](https://www.fia.fs.fed.us/library/database-documentation/)
* **Note**: samples are taken only from the top\-soil either 0–10\.16 cm or 10\.16–20\.32 cm.
```
if(!exists("chemsprops.FIADB")){
fia.loc <- vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/PLOT.csv")
fia.loc$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.loc$STATECD, fia.loc$COUNTYCD, fia.loc$PLOT, sep="_")
fia.lab <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_LAB.csv")
fia.lab$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.lab$STATECD, fia.lab$COUNTYCD, fia.lab$PLOT, sep="_")
## 23,765 rows
fia.des <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_SAMPLE_LOC.csv")
fia.des$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.des$STATECD, fia.des$COUNTYCD, fia.des$PLOT, sep="_")
#fia.lab$TXTRLYR1 = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","TXTRLYR1","INVYR")], match ="first")$TXTRLYR1
fia.lab$TXTRLYR2 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","TXTRLYR2","INVYR")], match ="first")$TXTRLYR2
#summary(as.factor(fia.lab$TXTRLYR1))
fia.lab$tex_psda = [factor](https://rdrr.io/r/base/factor.html)(fia.lab$TXTRLYR2, labels = [c](https://rdrr.io/r/base/c.html)("Organic", "Loamy", "Clayey", "Sandy", "Coarse sand", "Not measured"))
#Code Description
# 0 Organic.
# 1 Loamy.
# 2 Clayey.
# 3 Sandy.
# 4 Coarse sand.
# 9 Not measured - make plot notes
fia.lab$FORFLTHK = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","FORFLTHK","INVYR")], match ="first")$FORFLTHK
#summary(fia.lab$FORFLTHK)
fia.lab$LTRLRTHK = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","LTRLRTHK","INVYR")], match ="first")$LTRLRTHK
fia.lab$tot_thk = [rowSums](https://rdrr.io/r/base/colSums.html)(fia.lab[,[c](https://rdrr.io/r/base/c.html)("FORFLTHK", "LTRLRTHK")], na.rm=TRUE)
fia.lab$DPTHSBSL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","DPTHSBSL","INVYR")], match ="first")$DPTHSBSL
#summary(fia.lab$DPTHSBSL)
sel.fia = fia.loc$site_id [%in%](https://rdrr.io/r/base/match.html) fia.lab$site_id
#summary(sel.fia)
# 15,109
fia.loc = fia.loc[sel.fia, [c](https://rdrr.io/r/base/c.html)("site_id", "LON", "LAT")]
#summary(fia.lab$BULK_DENSITY) ## some strange values for BD!
#quantile(fia.lab$BULK_DENSITY, c(0.02, 0.98), na.rm=TRUE)
#summary(fia.lab$C_ORG_PCT)
#summary(as.factor(fia.lab$LAYER_TYPE))
#lattice::xyplot(BULK_DENSITY ~ C_ORG_PCT, fia.lab, scales=list(x = list(log = 2)))
#dim(fia.lab)
# 14571 126
fia.lab$c_tot = fia.lab$C_ORG_PCT * 10
fia.lab$oc = fia.lab$C_TOTAL_PCT * 10
fia.lab$n_tot = fia.lab$N_TOTAL_PCT * 10
fia.lab$db_od = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$BULK_DENSITY < 0.001 | fia.lab$BULK_DENSITY > 1.8, NA, fia.lab$BULK_DENSITY)
#lattice::xyplot(db_od ~ C_ORG_PCT, fia.lab, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(x=list(log=TRUE, equispaced.log=FALSE)), ylab="Bulk density", xlab="SOC wpct")
#hist(fia.lab$db_od, breaks=45)
## A lot of very small BD measurements
fia.lab$oc_d = [signif](https://rdrr.io/r/base/Round.html)(fia.lab$oc / 100 * fia.lab$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$COARSE_FRACTION_PCT), 0, fia.lab$COARSE_FRACTION_PCT))/100, 3)
#hist(fia.lab$oc_d, breaks=45, col="grey")
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", 0, NA)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", fia.lab$tot_thk, NA)
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 20.3 + fia.lab$tot_thk, fia.lab$hzn_bot)
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 0 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_bot)
#summary(fia.lab$EXCHNG_K) ## Negative values!
fia.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab, fia.loc)
#fia.m = fia.m[!duplicated(as.factor(paste(fia.m$site_id, fia.m$INVYR, fia.m$LAYER_TYPE, sep="_"))),]
fia.m$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)(fia.m$SAMPLE_DATE, format="%Y-%m-%d")
sel.d.fia = fia.m$site_obsdate < [as.Date](https://rdrr.io/r/base/as.Date.html)("1980-01-01", format="%Y-%m-%d")
fia.m$site_obsdate[[which](https://rdrr.io/r/base/which.html)(sel.d.fia)] = NA
#hist(fia.m$site_obsdate, breaks=25)
fia.h.lst <- [c](https://rdrr.io/r/base/c.html)("site_id", "usiteid", "site_obsdate", "LON", "LAT", "SAMPLE_ID", "layer_sequence", "hzn_top", "hzn_bot", "LAYER_TYPE", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "PH_H2O", "PH_CACL2", "cec_sum", "cec_nh4", "ECEC", "COARSE_FRACTION_PCT", "db_od", "EXCHNG_CA", "EXCHNG_MG", "EXCHNG_NA", "EXCHNG_K", "ec_satp", "ec_12pre")
x.na = fia.h.lst[[which](https://rdrr.io/r/base/which.html)(!fia.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fia.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fia.m[,i] = NA } }
chemsprops.FIADB = fia.m[,fia.h.lst]
chemsprops.FIADB$source_db = "FIADB"
chemsprops.FIADB$confidence_degree = 2
chemsprops.FIADB$project_url = "http://www.fia.fs.fed.us/"
chemsprops.FIADB$citation_url = "https://www.fia.fs.fed.us/library/database-documentation/"
chemsprops.FIADB = complete.vars(chemsprops.FIADB, sel = [c](https://rdrr.io/r/base/c.html)("PH_H2O", "oc", "EXCHNG_K"), coords = [c](https://rdrr.io/r/base/c.html)("LON", "LAT"))
#str(unique(paste(chemsprops.FIADB$LON, chemsprops.FIADB$LAT, sep="_")))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FIADB)
#> [1] 23208 36
#write.csv(chemsprops.FIADB, "/mnt/diskstation/data/Soil_points/USA/FIADB/fiadb_soil.pnts.csv")
```
#### 5\.3\.0\.5 Africa soil profiles database
* Leenaars, J. G., Van Oostrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\. Data download URL: <https://data.isric.org/>
```
if(!exists("chemsprops.AfSPDB")){
[library](https://rdrr.io/r/base/library.html)([foreign](https://svn.r-project.org/R-packages/trunk/foreign))
afspdb.profiles <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
afspdb.layers <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
afspdb.s.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD")
#summary(afspdb.layers$BlkDens)
## add missing columns
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(afspdb.layers)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(afspdb.layers[,j])) {
afspdb.layers[,j] <- [ifelse](https://rdrr.io/r/base/ifelse.html)(afspdb.layers[,j] < 0, NA, afspdb.layers[,j])
}
}
afspdb.layers$ca_ext = afspdb.layers$ExCa * 200
afspdb.layers$mg_ext = afspdb.layers$ExMg * 121
afspdb.layers$na_ext = afspdb.layers$ExNa * 230
afspdb.layers$k_ext = afspdb.layers$ExK * 391
#summary(afspdb.layers$k_ext)
afspdb.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afspdb.profiles[,afspdb.s.lst], afspdb.layers)
afspdb.m$oc_d = [signif](https://rdrr.io/r/base/Round.html)(afspdb.m$OrgC * afspdb.m$BlkDens * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(afspdb.m$CfPc), 0, afspdb.m$CfPc))/100, 3)
#summary(afspdb.m$T_Year)
afspdb.m$T_Year = [ifelse](https://rdrr.io/r/base/ifelse.html)(afspdb.m$T_Year < 0, NA, afspdb.m$T_Year)
afspdb.h.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD", "LayerID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "oc_d", "TotC", "TotalN", "PHKCl", "PHH2O", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "mg_ext", "na_ext", "k_ext", "EC", "ec_12pre")
x.na = afspdb.h.lst[[which](https://rdrr.io/r/base/which.html)(!afspdb.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(afspdb.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ afspdb.m[,i] = NA } }
chemsprops.AfSPDB = afspdb.m[,afspdb.h.lst]
chemsprops.AfSPDB$source_db = "AfSPDB"
chemsprops.AfSPDB$confidence_degree = 5
chemsprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
chemsprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
chemsprops.AfSPDB = complete.vars(chemsprops.AfSPDB, sel = [c](https://rdrr.io/r/base/c.html)("LabTxtr","OrgC","Clay","Ecec","PHH2O","EC","k_ext"), coords = [c](https://rdrr.io/r/base/c.html)("X_LonDD", "Y_LatDD"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.AfSPDB)
#> [1] 60306 36
```
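The multipliers 200, 121, 230 and 391 used above (and in several other blocks) convert exchangeable Ca, Mg, Na and K from cmol(+)/kg to mg/kg: each factor is roughly the equivalent weight of the cation in mg (atomic weight divided by charge, times 10 to go from cmol per kg to mol per kg). A short check of the Ca factor with a made-up input value:

```
## Why ca_ext = ExCa * 200: exchangeable Ca from cmol(+)/kg to mg/kg.
ExCa   <- 2        # made-up value in cmol(+)/kg
aw.ca  <- 40.08    # atomic weight of Ca in g/mol
charge <- 2        # Ca2+
## 1 cmol(+)/kg = 0.01 mol(+)/kg = 0.01/charge mol Ca/kg:
ca_ext <- ExCa * 0.01 / charge * aw.ca * 1000   # mg/kg
ca_ext  # ~401 mg/kg, i.e. a factor of ~200 per cmol(+)/kg
```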
#### 5\.3\.0\.6 Africa Soil Information Service (AfSIS) Soil Chemistry
* Towett, E. K., Shepherd, K. D., Tondoh, J. E., Winowiecki, L. A., Lulseged, T., Nyambura, M., … \& Cadisch, G. (2015\). Total elemental composition of soils in Sub\-Saharan Africa and relationship with soil forming factors. Geoderma Regional, 5, 157\-168\. [https://doi.org/10\.1016/j.geodrs.2015\.06\.002](https://doi.org/10.1016/j.geodrs.2015.06.002)
* [AfSIS Soil Chemistry](https://github.com/qedsoftware/afsis-soil-chem-tutorial) produced by World Agroforestry Centre (ICRAF), Quantitative Engineering Design (QED), Center for International Earth Science Information Network (CIESIN), The International Center for Tropical Agriculture (CIAT), Crop Nutrition Laboratory Services (CROPNUTS) and Rothamsted Research (RRES). Data download URL: <https://registry.opendata.aws/afsis/>
```
if(!exists("chemsprops.AfSIS1")){
afsis1.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Georeferences/georeferences.csv")
afsis1.xy$Sampling.date = 2011
afsis1.lst = [list.files](https://rdrr.io/r/base/list.files.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Wet_Chemistry", pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("*.csv$"), full.names = TRUE, recursive = TRUE)
afsis1.hor = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([lapply](https://rdrr.io/r/base/lapply.html)(afsis1.lst, read.csv))
tansis.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Georeferences/georeferences.csv")
#summary(tansis.xy$Sampling.date)
tansis.xy$Sampling.date = 2018
tansis.lst = [list.files](https://rdrr.io/r/base/list.files.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Wet_Chemistry", pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("*.csv$"), full.names = TRUE, recursive = TRUE)
tansis.hor = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([lapply](https://rdrr.io/r/base/lapply.html)(tansis.lst, read.csv))
afsis1t.df = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([list](https://rdrr.io/r/base/list.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afsis1.hor, afsis1.xy, by="SSN"), plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tansis.hor, tansis.xy, by="SSN")))
afsis1t.df$UpDpth = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 20, 0)
afsis1t.df$LowDpth = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 50, 20)
afsis1t.df$LayerNr = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 2, 1)
#summary(afsis1t.df$C...Org)
afsis1t.df$oc = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("C...Org", "X.C")], na.rm=TRUE) * 10
afsis1t.df$c_tot = afsis1t.df$Total.carbon
afsis1t.df$n_tot = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("Total.nitrogen", "X.N")], na.rm=TRUE) * 10
afsis1t.df$ph_h2o = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("PH", "pH")], na.rm=TRUE)
## multiple texture fractions - which one is the total clay, sand, silt?
## Clay content for water dispersed particles-recorded after 4 minutes of ultrasonication
#summary(afsis1t.df$Psa.w4clay)
#plot(afsis1t.df[,c("Longitude", "Latitude")])
afsis1.h.lst <- [c](https://rdrr.io/r/base/c.html)("SSN", "Site", "Sampling.date", "Longitude", "Latitude", "Soil.material", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Psa.w4clay", "Psa.w4silt", "Psa.w4sand", "oc", "oc_d", "c_tot", "n_tot", "PHKCl", "ph_h2o", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "M3.Mg", "M3.Na", "M3.K", "EC", "ec_12pre")
x.na = afspdb.h.lst[[which](https://rdrr.io/r/base/which.html)(!afsis1.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(afsis1t.df))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ afsis1t.df[,i] = NA } }
chemsprops.AfSIS1 = afsis1t.df[,afsis1.h.lst]
chemsprops.AfSIS1$source_db = "AfSIS1"
chemsprops.AfSIS1$confidence_degree = 2
chemsprops.AfSIS1$project_url = "https://registry.opendata.aws/afsis/"
chemsprops.AfSIS1$citation_url = "https://doi.org/10.1016/j.geodrs.2015.06.002"
chemsprops.AfSIS1 = complete.vars(chemsprops.AfSIS1, sel = [c](https://rdrr.io/r/base/c.html)("Psa.w4clay","oc","ph_h2o","M3.K"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.AfSIS1)
#> [1] 4162 36
```
#### 5\.3\.0\.7 Fine Root Ecology Database (FRED)
* Iversen CM, McCormack ML, Baer JK, Powell AS, Chen W, Collins C, Fan Y, Fanin N, Freschet GT, Guo D, Hogan JA, Kou L, Laughlin DC, Lavely E, Liese R, Lin D, Meier IC, Montagnoli A, Roumet C, See CR, Soper F, Terzaghi M, Valverde\-Barrantes OJ, Wang C, Wright SJ, Wurzburger N, Zadworny M. (2021\). [Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 3](https://roots.ornl.gov/). Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.014/1459186](https://doi.org/10.25581/ornlsfa.014/1459186).
```
if(!exists("chemsprops.FRED")){
[Sys.setenv](https://rdrr.io/r/base/Sys.setenv.html)("VROOM_CONNECTION_SIZE" = 131072 * 2)
fred = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", skip = 10, col_names=FALSE)
## 57,190 x 1,164
#nm.fred = read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Column_Definitions_20210423-091040.csv", header=TRUE)
nm.fred0 = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", nrows=2)
[names](https://rdrr.io/r/base/names.html)(fred) = [make.names](https://rdrr.io/r/base/make.names.html)([t](https://rdrr.io/r/base/t.html)(nm.fred0)[,1])
## 1164 columns!
fred.h.lst = [c](https://rdrr.io/r/base/c.html)("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "oc_d", "c_tot", "Soil.N.content", "ph_kcl", "Soil.pH_Water", "Soil.pH_Salt", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "Soil.effective.cation.exchange.capacity..ECEC.", "wpg2", "Soil.bulk.density", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
fred$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([rowMeans](https://rdrr.io/r/base/colSums.html)(fred[,[c](https://rdrr.io/r/base/c.html)("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE))
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#names(fred)[grep("Notes_Row", names(fred))]
#summary(fred[,grep("clay", names(fred))])
#summary(fred[,grep("cation.exchange", names(fred))])
#summary(fred[,grep("organic.C", names(fred))])
#summary(fred$Soil.organic.C.content)
#summary(fred$Soil.bulk.density)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Lower.sampling.depth)
fred$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Upper.sampling.depth)
fred$oc_d = [signif](https://rdrr.io/r/base/Round.html)(fred$Soil.organic.C.content / 1000 * fred$Soil.bulk.density * 1000, 3)
#summary(fred$oc_d)
x.na = fred.h.lst[[which](https://rdrr.io/r/base/which.html)(!fred.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fred))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
chemsprops.FRED = fred[,fred.h.lst]
#plot(chemsprops.FRED[,4:5])
chemsprops.FRED$source_db = "FRED"
chemsprops.FRED$confidence_degree = 5
chemsprops.FRED$project_url = "https://roots.ornl.gov/"
chemsprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.014/1459186"
chemsprops.FRED = complete.vars(chemsprops.FRED, sel = [c](https://rdrr.io/r/base/c.html)("Soil.organic.C.content", "Soil.texture_Fraction.clay", "Soil.pH_Water"))
## many duplicates
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FRED)
#> [1] 858 36
```
#### 5\.3\.0\.8 Global root traits (GRooT) database (compilation)
* Guerrero‐Ramírez, N. R., Mommer, L., Freschet, G. T., Iversen, C. M., McCormack, M. L., Kattge, J., … \& Weigelt, A. (2021\). [Global root traits (GRooT) database](https://dx.doi.org/10.1111/geb.13179). Global ecology and biogeography, 30(1\), 25\-37\. [https://dx.doi.org/10\.1111/geb.13179](https://dx.doi.org/10.1111/geb.13179)
```
if(!exists("chemsprops.GROOT")){
#Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 2)
GROOT = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/INT/GRooT/GRooTFullVersion.csv")
## 114,222 x 73
[c](https://rdrr.io/r/base/c.html)("locationID", "GRooTID", "originalID", "source", "year", "decimalLatitude", "decimalLongitud", "soilpH", "soilTexture", "soilCarbon", "soilNitrogen", "soilPhosphorus", "soilCarbonToNitrogen", "soilBaseCationSaturation", "soilCationExchangeCapacity", "soilOrganicMatter", "soilWaterGravimetric", "soilWaterVolumetric")
#summary(GROOT$soilCarbon)
#summary(!is.na(GROOT$soilCarbon))
#summary(GROOT$soilOrganicMatter)
#summary(GROOT$soilNitrogen)
#summary(GROOT$soilpH)
#summary(as.factor(GROOT$soilTexture))
#lattice::xyplot(soilCarbon ~ soilpH, GROOT, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(y=list(log=TRUE, equispaced.log=FALSE)), ylab="SOC", xlab="pH")
GROOT$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)([paste0](https://rdrr.io/r/base/paste.html)(GROOT$year, "-01-01"), format="%Y-%m-%d")
GROOT$hzn_top = 0
GROOT$hzn_bot = 30
GROOT.h.lst = [c](https://rdrr.io/r/base/c.html)("locationID", "originalID", "site_obsdate", "decimalLongitud", "decimalLatitude", "GRooTID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "soilTexture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "soilCarbon", "oc_d", "c_tot", "soilNitrogen", "ph_kcl", "soilpH", "ph_cacl2", "soilCationExchangeCapacity", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = GROOT.h.lst[[which](https://rdrr.io/r/base/which.html)(!GROOT.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(GROOT))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ GROOT[,i] = NA } }
chemsprops.GROOT = GROOT[,GROOT.h.lst]
chemsprops.GROOT$source_db = "GROOT"
chemsprops.GROOT$confidence_degree = 8
chemsprops.GROOT$project_url = "https://groot-database.github.io/GRooT/"
chemsprops.GROOT$citation_url = "https://dx.doi.org/10.1111/geb.13179"
chemsprops.GROOT = complete.vars(chemsprops.GROOT, sel = [c](https://rdrr.io/r/base/c.html)("soilCarbon", "soilpH"), coords = [c](https://rdrr.io/r/base/c.html)("decimalLongitud", "decimalLatitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.GROOT)
#> [1] 718 36
```
#### 5\.3\.0\.9 Global Soil Respiration DB
* Bond\-Lamberty, B. and Thomson, A. (2010\). A global database of soil respiration data, Biogeosciences, 7, 1915–1926, [https://doi.org/10\.5194/bg\-7\-1915\-2010](https://doi.org/10.5194/bg-7-1915-2010)
```
if(!exists("chemsprops.SRDB")){
srdb = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SRDB/srdb-data.csv")
## 10366 x 85
srdb.h.lst = [c](https://rdrr.io/r/base/c.html)("Site_ID", "Notes", "Study_midyear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psd", "Soil_clay", "Soil_silt", "Soil_sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Soil_BD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
#summary(srdb$Study_midyear)
srdb$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(srdb$C_soildepth), 100, srdb$C_soildepth)
srdb$hzn_top = 0
#summary(srdb$Soil_clay)
#summary(srdb$C_soilmineral)
srdb$oc_d = [signif](https://rdrr.io/r/base/Round.html)(srdb$C_soilmineral / 1000 / (srdb$hzn_bot/100), 3)
#summary(srdb$oc_d)
#summary(srdb$Soil_BD)
srdb$oc = srdb$oc_d / srdb$Soil_BD
#summary(srdb$oc)
x.na = srdb.h.lst[[which](https://rdrr.io/r/base/which.html)(!srdb.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(srdb))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ srdb[,i] = NA } }
chemsprops.SRDB = srdb[,srdb.h.lst]
#plot(chemsprops.SRDB[,4:5])
chemsprops.SRDB$source_db = "SRDB"
chemsprops.SRDB$confidence_degree = 5
chemsprops.SRDB$project_url = "https://github.com/bpbond/srdb/"
chemsprops.SRDB$citation_url = "https://doi.org/10.5194/bg-7-1915-2010"
chemsprops.SRDB = complete.vars(chemsprops.SRDB, sel = [c](https://rdrr.io/r/base/c.html)("oc", "Soil_clay", "Soil_BD"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SRDB)
#> [1] 1596 36
```
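The unit logic in the block above is compact, so here is the arithmetic spelled out with made-up numbers, assuming `C_soilmineral` is a carbon stock in g C per m2 (the usual SRDB convention): dividing by 1000 and by the layer thickness in metres gives a density in kg C per m3 (`oc_d`), and dividing that by bulk density (g/cm3, numerically equal to t/m3) gives back a concentration in g C per kg soil (`oc`).

```
## Worked example (made-up values) of the SRDB unit conversions above;
## C_soilmineral assumed to be a stock in g C / m2.
C_soilmineral <- 9000   # g C / m2
hzn_bot       <- 100    # layer thickness in cm (0-100 cm)
Soil_BD       <- 1.2    # bulk density in g/cm3 (= t/m3)
oc_d <- signif(C_soilmineral / 1000 / (hzn_bot / 100), 3)  # kg C / m3
oc   <- oc_d / Soil_BD                                     # g C / kg soil
c(oc_d = oc_d, oc = oc)
#> oc_d   oc
#>  9.0  7.5
```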
#### 5\.3\.0\.10 SOils DAta Harmonization database (SoDaH)
* Wieder, W. R., Pierson, D., Earl, S., Lajtha, K., Baer, S., Ballantyne, F., … \& Weintraub, S. (2020\). [SoDaH: the SOils DAta Harmonization database, an open\-source synthesis of soil data from research networks, version 1\.0](https://doi.org/10.5194/essd-2020-195). Earth System Science Data Discussions, 1\-19\. [https://doi.org/10\.5194/essd\-2020\-195](https://doi.org/10.5194/essd-2020-195). Data download URL: [https://doi.org/10\.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4](https://doi.org/10.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4)
```
if(!exists("chemsprops.SoDaH")){
sodah.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SoDaH/521_soils_data_harmonization_6e8416fa0c9a2c2872f21ba208e6a919.csv")
#head(sodah.hor)
#summary(sodah.hor$coarse_frac)
#summary(sodah.hor$lyr_soc)
#summary(sodah.hor$lyr_som_WalkleyBlack/1.724)
#summary(as.factor(sodah.hor$observation_date))
sodah.hor$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([substr](https://rdrr.io/r/base/substr.html)(sodah.hor$observation_date, 1, 4))
sodah.hor$oc = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sodah.hor$lyr_soc), sodah.hor$lyr_som_WalkleyBlack/1.724, sodah.hor$lyr_soc) * 10
sodah.hor$n_tot = sodah.hor$lyr_n_tot * 10
sodah.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sodah.hor$oc / 1000 * sodah.hor$bd_samp * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sodah.hor$coarse_frac), 0, sodah.hor$coarse_frac))/100, 3)
sodah.hor$site_key = [paste](https://rdrr.io/r/base/paste.html)(sodah.hor$network, sodah.hor$location_name, sep="_")
sodah.hor$labsampnum = [make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)(sodah.hor$network, sodah.hor$location_name, sodah.hor$L1, sep="_"))
#summary(sodah.hor$oc_d)
sodah.h.lst = [c](https://rdrr.io/r/base/c.html)("site_key", "data_file", "observation_date", "long", "lat", "labsampnum", "layer_sequence", "layer_top", "layer_bot", "hzn", "profile_texture_class", "clay", "silt", "sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl", "cec_sum", "cec_nh4", "ecec", "coarse_frac", "bd_samp", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
x.na = sodah.h.lst[[which](https://rdrr.io/r/base/which.html)(!sodah.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sodah.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sodah.hor[,i] = NA } }
chemsprops.SoDaH = sodah.hor[,sodah.h.lst]
#plot(chemsprops.SoDaH[,4:5])
chemsprops.SoDaH$source_db = "SoDaH"
chemsprops.SoDaH$confidence_degree = 3
chemsprops.SoDaH$project_url = "https://lter.github.io/som-website"
chemsprops.SoDaH$citation_url = "https://doi.org/10.5194/essd-2020-195"
chemsprops.SoDaH = complete.vars(chemsprops.SoDaH, sel = [c](https://rdrr.io/r/base/c.html)("oc", "clay", "ph_h2o"), coords = [c](https://rdrr.io/r/base/c.html)("long", "lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SoDaH)
#> [1] 20383 36
```
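Where SoDaH reports only Walkley\-Black soil organic matter, the block above falls back on the conventional van Bemmelen factor of 1\.724 (organic matter assumed to contain roughly 58% carbon) and then multiplies by 10 to go from % to g/kg. A one\-line check with a made\-up value:

```
## Made-up example of the SOM -> SOC fallback used above:
lyr_som_WalkleyBlack <- 3.45              # organic matter in % wt
oc <- lyr_som_WalkleyBlack / 1.724 * 10   # organic carbon in g/kg
oc                                        # ~20 g/kg (i.e. ~2% SOC)
```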
#### 5\.3\.0\.11 ISRIC WISE harmonized soil profile data
* Batjes, N.H. (2019\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management 5:124–127\. Data download URL: [https://files.isric.org/public/wise/WD\-WISE.zip](https://files.isric.org/public/wise/WD-WISE.zip)
```
if(!exists("chemsprops.WISE")){
wise.site <- [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.s.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_id", "PITREF", "DATEYR", "LONDD", "LATDD")
wise.site$LONDD = [as.numeric](https://rdrr.io/r/base/numeric.html)(wise.site$LONDD)
wise.site$LATDD = [as.numeric](https://rdrr.io/r/base/numeric.html)(wise.site$LATDD)
wise.layer <- [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.layer$ca_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXCA * 200, 4)
wise.layer$mg_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXMG * 121, 3)
wise.layer$na_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXNA * 230, 3)
wise.layer$k_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXK * 391, 3)
wise.layer$oc_d = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$ORGC / 1000 * wise.layer$BULKDENS * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wise.layer$GRAVEL), 0, wise.layer$GRAVEL))/100, 3)
wise.h.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_ID", "labsampnum", "HONU", "TOPDEP", "BOTDEP", "DESIG", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "oc_d", "c_tot", "TOTN", "PHKCL", "PHH2O", "PHCACL2", "CECSOIL", "cec_nh4", "ecec", "GRAVEL" , "BULKDENS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ECE", "ec_12pre")
x.na = wise.h.lst[[which](https://rdrr.io/r/base/which.html)(!wise.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(wise.layer))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ wise.layer[,i] = NA } }
chemsprops.WISE = [merge](https://rdrr.io/r/base/merge.html)(wise.site[,wise.s.lst], wise.layer[,wise.h.lst], by.x="WISE3_id", by.y="WISE3_ID")
chemsprops.WISE$source_db = "ISRIC_WISE"
chemsprops.WISE$confidence_degree = 4
chemsprops.WISE$project_url = "https://isric.org"
chemsprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
chemsprops.WISE = complete.vars(chemsprops.WISE, sel = [c](https://rdrr.io/r/base/c.html)("ORGC","CLAY","PHH2O","CECSOIL","k_ext"), coords = [c](https://rdrr.io/r/base/c.html)("LONDD", "LATDD"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.WISE)
#> [1] 23278 36
```
#### 5\.3\.0\.12 GEMAS
* Reimann, C., Fabian, K., Birke, M., Filzmoser, P., Demetriades, A., Négrel, P., … \& Anderson, M. (2018\). [GEMAS: Establishing geochemical background and threshold for 53 chemical elements in European agricultural soil](https://doi.org/10.1016/j.apgeochem.2017.01.021). Applied Geochemistry, 88, 302\-318\. Data download URL: <http://gemas.geolba.ac.at/>
```
if(!exists("chemsprops.GEMAS")){
gemas.samples <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/GEMAS/GEMAS.csv", stringsAsFactors = FALSE)
## GEMAS, agricultural soil, 0-20 cm, air dried, <2 mm, aqua regia Data from ACME, total C, TOC, CEC, ph_CaCl2
gemas.samples$hzn_top = 0
gemas.samples$hzn_bot = 20
gemas.samples$oc = gemas.samples$TOC * 10
#summary(gemas.samples$oc)
gemas.samples$c_tot = gemas.samples$C_tot * 10
gemas.samples$site_obsdate = 2009
gemas.h.lst <- [c](https://rdrr.io/r/base/c.html)("ID", "COUNRTY", "site_obsdate", "XCOO", "YCOO", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "TYPE", "tex_psda", "clay", "silt", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = gemas.h.lst[[which](https://rdrr.io/r/base/which.html)(!gemas.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(gemas.samples))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ gemas.samples[,i] = NA } }
chemsprops.GEMAS <- gemas.samples[,gemas.h.lst]
chemsprops.GEMAS$source_db = "GEMAS_2009"
chemsprops.GEMAS$confidence_degree = 2
chemsprops.GEMAS$project_url = "http://gemas.geolba.ac.at/"
chemsprops.GEMAS$citation_url = "https://doi.org/10.1016/j.apgeochem.2017.01.021"
chemsprops.GEMAS = complete.vars(chemsprops.GEMAS, sel = [c](https://rdrr.io/r/base/c.html)("oc","clay","pH_CaCl2"), coords = [c](https://rdrr.io/r/base/c.html)("XCOO", "YCOO"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.GEMAS)
#> [1] 4131 36
```
#### 5\.3\.0\.13 LUCAS soil
* Orgiazzi, A., Ballabio, C., Panagos, P., Jones, A., \& Fernández‐Ugalde, O. (2018\). [LUCAS Soil, the largest expandable soil dataset for Europe: a review](https://doi.org/10.1111/ejss.12499). European Journal of Soil Science, 69(1\), 140\-153\. Data download URL: [https://esdac.jrc.ec.europa.eu/content/lucas\-2009\-topsoil\-data](https://esdac.jrc.ec.europa.eu/content/lucas-2009-topsoil-data)
```
if(!exists("chemsprops.LUCAS")){
lucas.samples <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_TOPSOIL_v1.xlsx", sheet = 1)
lucas.samples$site_obsdate <- "2009"
#summary(lucas.samples$N)
lucas.ro <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/Romania.xlsx", sheet = 1)
lucas.ro$site_obsdate <- "2012"
[names](https://rdrr.io/r/base/names.html)(lucas.samples)[[which](https://rdrr.io/r/base/which.html)(]
lucas.ro = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(lucas.ro, replace=[c](https://rdrr.io/r/base/c.html)("Soil.ID"="sample_ID", "GPS_X_LONG"="GPS_LONG", "GPS_Y_LAT"="GPS_LAT", "pHinH2O"="pH_in_H2O", "pHinCaCl2"="pH_in_CaCl"))
lucas.bu <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/Bulgaria.xlsx", sheet = 1)
lucas.bu$site_obsdate <- "2012"
[names](https://rdrr.io/r/base/names.html)(lucas.samples)[[which](https://rdrr.io/r/base/which.html)(]
#lucas.ch <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_2015_Topsoil_data_of_Switzerland-with-coordinates.xlsx_.xlsx", sheet = 1, startRow = 2)
#lucas.ch = plyr::rename(lucas.ch, replace=c("Soil_ID"="sample_ID", "GPS_.LAT"="GPS_LAT", "pH.in.H2O"="pH_in_H2O", "pH.in.CaCl2"="pH_in_CaCl", "Calcium.carbonate/.g.kg–1"="CaCO3", "Silt/.g.kg–1"="silt", "Sand/.g.kg–1"="sand", "Clay/.g.kg–1"="clay", "Organic.carbon/.g.kg–1"="OC"))
## Double readings?
lucas.t = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([list](https://rdrr.io/r/base/list.html)(lucas.samples, lucas.ro, lucas.bu))
lucas.h.lst <- [c](https://rdrr.io/r/base/c.html)("POINT_ID", "usiteid", "site_obsdate", "GPS_LONG", "GPS_LAT", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay", "silt", "sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_in_H2O", "pH_in_CaCl", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas.h.lst[[which](https://rdrr.io/r/base/which.html)(!lucas.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(lucas.t))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ lucas.t[,i] = NA } }
chemsprops.LUCAS <- lucas.t[,lucas.h.lst]
chemsprops.LUCAS$source_db = "LUCAS_2009"
chemsprops.LUCAS$hzn_top <- 0
chemsprops.LUCAS$hzn_bot <- 20
chemsprops.LUCAS$confidence_degree = 2
chemsprops.LUCAS$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS = complete.vars(chemsprops.LUCAS, sel = [c](https://rdrr.io/r/base/c.html)("OC","clay","pH_in_H2O"), coords = [c](https://rdrr.io/r/base/c.html)("GPS_LONG", "GPS_LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LUCAS)
#> [1] 21272 36
```
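The 2009 LUCAS table and the later Romania and Bulgaria additions do not share exactly the same columns; `plyr::rbind.fill()` stacks such tables and fills the columns missing from any one of them with `NA`. A toy example (hypothetical column names):

```
a <- data.frame(POINT_ID = 1:2, OC = c(12, 30))
b <- data.frame(POINT_ID = 3, OC = 8, pH_in_H2O = 6.5)
plyr::rbind.fill(a, b)
#> the rows coming from `a` get NA for pH_in_H2O
```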
```
if({
#lucas2015.samples <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.xlsx", sheet = 1)
lucas2015.xy = readOGR("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.shp")
#head(as.data.frame(lucas2015.xy))
lucas2015.xy = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(lucas2015.xy)
## https://www.aqion.de/site/130
## 100 mS/m = 1 mS/cm = 1 dS/m, so EC reported in mS/m is divided by 100 to obtain dS/m (see the worked example after this code block)
lucas2015.xy$ec_satp = lucas2015.xy$EC / 100
lucas2015.h.lst <- [c](https://rdrr.io/r/base/c.html)("Point_ID", "LC0_Desc", "site_obsdate", "coords.x1", "coords.x2", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "Clay", "Silt", "Sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_H20", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas2015.h.lst[[which](https://rdrr.io/r/base/which.html)(!lucas2015.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(lucas2015.xy))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ lucas2015.xy[,i] = NA } }
chemsprops.LUCAS2 <- lucas2015.xy[,lucas2015.h.lst]
chemsprops.LUCAS2$source_db = "LUCAS_2015"
chemsprops.LUCAS2$hzn_top <- 0
chemsprops.LUCAS2$hzn_bot <- 20
chemsprops.LUCAS2$site_obsdate <- "2015"
chemsprops.LUCAS2$confidence_degree = 2
chemsprops.LUCAS2$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS2$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS2 = complete.vars(chemsprops.LUCAS2, sel = [c](https://rdrr.io/r/base/c.html)("OC","Clay","pH_H20"), coords = [c](https://rdrr.io/r/base/c.html)("coords.x1", "coords.x2"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LUCAS2)
#> [1] 21859 36
```
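A quick worked example of the EC unit conversion applied above (assuming the LUCAS 2015 `EC` column is reported in mS/m):

```
ec_mSm <- c(25, 150, 800)   # hypothetical readings in mS/m
ec_mSm / 100                #> 0.25, 1.5 and 8 dS/m (1 dS/m = 1 mS/cm = 100 mS/m)
```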
#### 5\.3\.0\.14 Mangrove forest soil DB
* Sanderman, J., Hengl, T., Fiske, G., Solvik, K., Adame, M. F., Benson, L., … \& Duncan, C. (2018\). [A global map of mangrove forest soil carbon at 30 m spatial resolution](https://doi.org/10.1088/1748-9326/aabe1c). Environmental Research Letters, 13(5\), 055002\. Data download URL: [https://dataverse.harvard.edu/dataset.xhtml?persistentId\=doi:10\.7910/DVN/OCYUIT](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OCYUIT)
```
if({
mng.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_sites.csv", skip=1)
mng.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_horizons.csv", skip=1)
mngALL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(mng.hors, mng.profs, by=[c](https://rdrr.io/r/base/c.html)("Site.name"))
mngALL$oc = mngALL$OC_final * 10
mngALL$oc_d = mngALL$CD_calc * 1000
mngALL$hzn_top = mngALL$U_depth * 100
mngALL$hzn_bot = mngALL$L_depth * 100
mngALL$wpg2 = 0
#summary(mngALL$BD_reported) ## some very high values 3.26 t/m3
mngALL$Year = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(mngALL$Year_sampled), mngALL$Years_collected, mngALL$Year_sampled)
mng.col = [c](https://rdrr.io/r/base/c.html)("Site.name", "Site..", "Year", "Longitude_Adjusted", "Latitude_Adjusted", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD_reported", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = mng.col[[which](https://rdrr.io/r/base/which.html)(!mng.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(mngALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ mngALL[,i] = NA } }
chemsprops.Mangroves = mngALL[,mng.col]
chemsprops.Mangroves$source_db = "MangrovesDB"
chemsprops.Mangroves$confidence_degree = 4
chemsprops.Mangroves$project_url = "http://maps.oceanwealth.org/mangrove-restoration/"
chemsprops.Mangroves$citation_url = "https://doi.org/10.1088/1748-9326/aabe1c"
chemsprops.Mangroves = complete.vars(chemsprops.Mangroves, sel = [c](https://rdrr.io/r/base/c.html)("oc","BD_reported"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude_Adjusted", "Latitude_Adjusted"))
#head(chemsprops.Mangroves)
#levels(as.factor(mngALL$OK.to.release.))
mng.rm = chemsprops.Mangroves$Site.name[chemsprops.Mangroves$Site.name [%in%](https://rdrr.io/r/base/match.html) mngALL$Site.name[[grep](https://rdrr.io/r/base/grep.html)("N", mngALL$OK.to.release., ignore.case = FALSE)]]
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Mangroves)
#> [1] 7734 36
```
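As in the imports above, organic carbon reported in mass % is converted to g/kg by multiplying by 10; a trivial worked example with made-up values:

```
oc_pct <- c(0.8, 2.5, 12)   # organic carbon in mass %
oc_pct * 10                 #> 8, 25 and 120 g/kg
```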
#### 5\.3\.0\.15 CIFOR peatland points
Peatland soil measurements (points) from the literature described in:
* Murdiyarso, D., Roman\-Cuesta, R. M., Verchot, L. V., Herold, M., Gumbricht, T., Herold, N., \& Martius, C. (2017\). New map reveals more peat in the tropics (Vol. 189\). CIFOR. [https://doi.org/10\.17528/cifor/006452](https://doi.org/10.17528/cifor/006452)
```
if({
cif.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/CIFOR_peatlands/SOC_literature_CIFOR.csv")
#summary(cif.hors$BD..g.cm..)
#summary(cif.hors$SOC)
cif.hors$oc = cif.hors$SOC * 10
cif.hors$wpg2 = 0
cif.hors$c_tot = cif.hors$TOC.content.... * 10
cif.hors$oc_d = cif.hors$C.density..kg.C.m..
cif.hors$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([substr](https://rdrr.io/r/base/substr.html)(cif.hors$year, 1, 4))-1
cif.col = [c](https://rdrr.io/r/base/c.html)("SOURCEID", "usiteid", "site_obsdate", "modelling.x", "modelling.y", "labsampnum", "layer_sequence", "Upper", "Lower", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD..g.cm..", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cif.col[[which](https://rdrr.io/r/base/which.html)(!cif.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cif.hors))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cif.hors[,i] = NA } }
chemsprops.Peatlands = cif.hors[,cif.col]
chemsprops.Peatlands$source_db = "CIFOR"
chemsprops.Peatlands$confidence_degree = 4
chemsprops.Peatlands$project_url = "https://www.cifor.org/"
chemsprops.Peatlands$citation_url = "https://doi.org/10.17528/cifor/006452"
chemsprops.Peatlands = complete.vars(chemsprops.Peatlands, sel = [c](https://rdrr.io/r/base/c.html)("oc","BD..g.cm.."), coords = [c](https://rdrr.io/r/base/c.html)("modelling.x", "modelling.y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Peatlands)
#> [1] 756 36
```
#### 5\.3\.0\.16 LandPKS observations
* Herrick, J. E., Urama, K. C., Karl, J. W., Boos, J., Johnson, M. V. V., Shepherd, K. D., … \& Kosnik, C. (2013\). [The Global Land\-Potential Knowledge System (LandPKS): Supporting Evidence\-based, Site\-specific Land Use and Management through Cloud Computing, Mobile Applications, and Crowdsourcing](https://doi.org/10.2489/jswc.68.1.5A). Journal of Soil and Water Conservation, 68(1\), 5A\-12A. Data download URL: [http://portal.landpotential.org/\#/landpksmap](http://portal.landpotential.org/#/landpksmap)
```
if({
pks = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/LandPKS/Export_LandInfo_Data.csv", stringsAsFactors = FALSE)
#str(pks)
pks.hor = [data.frame](https://rdrr.io/r/base/data.frame.html)(rock_fragments = [c](https://rdrr.io/r/base/c.html)(pks$rock_fragments_layer_0_1cm,
pks$rock_fragments_layer_1_10cm,
pks$rock_fragments_layer_10_20cm,
pks$rock_fragments_layer_20_50cm,
pks$rock_fragments_layer_50_70cm,
pks$rock_fragments_layer_70_100cm,
pks$rock_fragments_layer_100_120cm),
tex_field = [c](https://rdrr.io/r/base/c.html)(pks$texture_layer_0_1cm,
pks$texture_layer_1_10cm,
pks$texture_layer_10_20cm,
pks$texture_layer_20_50cm,
pks$texture_layer_50_70cm,
pks$texture_layer_70_100cm,
pks$texture_layer_100_120cm))
pks.hor$hzn_top = [c](https://rdrr.io/r/base/c.html)([rep](https://rdrr.io/r/base/rep.html)(0, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(1, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(10, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(20, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(50, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(70, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(100, [nrow](https://rdrr.io/r/base/nrow.html)(pks)))
pks.hor$hzn_bot = [c](https://rdrr.io/r/base/c.html)([rep](https://rdrr.io/r/base/rep.html)(1, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(10, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(20, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(50, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(70, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(100, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(120, [nrow](https://rdrr.io/r/base/nrow.html)(pks)))
pks.hor$longitude_decimal_degrees = [rep](https://rdrr.io/r/base/rep.html)(pks$longitude, 7)
pks.hor$latitude_decimal_degrees = [rep](https://rdrr.io/r/base/rep.html)(pks$latitude, 7)
pks.hor$site_obsdate = [rep](https://rdrr.io/r/base/rep.html)(pks$modified_date, 7)
pks.hor$site_key = [rep](https://rdrr.io/r/base/rep.html)(pks$id, 7)
#summary(as.factor(pks.hor$tex_field))
tex.tr = [data.frame](https://rdrr.io/r/base/data.frame.html)(tex_field=[c](https://rdrr.io/r/base/c.html)("CLAY", "CLAY LOAM", "LOAM", "LOAMY SAND", "SAND", "SANDY CLAY", "SANDY CLAY LOAM", "SANDY LOAM", "SILT LOAM", "SILTY CLAY", "SILTY CLAY LOAM"),
clay_tot_psa=[c](https://rdrr.io/r/base/c.html)(62.4, 34.0, 19.0, 5.8, 3.3, 41.7, 27.0, 10.0, 13.1, 46.7, 34.0),
silt_tot_psa=[c](https://rdrr.io/r/base/c.html)(17.8, 34.0, 40.0, 12.0, 5.0, 6.7, 13.0, 25.0, 65.7, 46.7, 56.0),
sand_tot_psa=[c](https://rdrr.io/r/base/c.html)(19.8, 32.0, 41.0, 82.2, 91.7, 51.6, 60.0, 65.0, 21.2, 6.7, 10.0))
pks.hor$clay_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$clay_tot_psa
pks.hor$silt_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$silt_tot_psa
pks.hor$sand_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$sand_tot_psa
#summary(as.factor(pks.hor$rock_fragments))
pks.hor$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments==">60%", 65, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="35-60%", 47.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="15-35%", 25, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="1-15%" | pks.hor$rock_fragments=="0-15%", 7.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="0-1%", 0.5, NA)))))
#head(pks.hor)
#plot(pks.hor[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
pks.col = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_field", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = pks.col[[which](https://rdrr.io/r/base/which.html)(!pks.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(pks.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ pks.hor[,i] = NA } }
chemsprops.LandPKS = pks.hor[,pks.col]
chemsprops.LandPKS$source_db = "LandPKS"
chemsprops.LandPKS$confidence_degree = 8
chemsprops.LandPKS$project_url = "http://portal.landpotential.org"
chemsprops.LandPKS$citation_url = "https://doi.org/10.2489/jswc.68.1.5A"
chemsprops.LandPKS = complete.vars(chemsprops.LandPKS, sel = [c](https://rdrr.io/r/base/c.html)("clay_tot_psa","wpg2"), coords = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LandPKS)
#> [1] 41644 36
```
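A compact toy version of the lookup-table join used above, which translates field texture classes into representative particle-size fractions (fraction values copied from the `tex.tr` table above; the toy `obs` table is hypothetical):

```
tex.tr <- data.frame(tex_field = c("LOAM", "SAND"),
                     clay_tot_psa = c(19.0, 3.3),
                     silt_tot_psa = c(40.0, 5.0),
                     sand_tot_psa = c(41.0, 91.7))
obs <- data.frame(tex_field = c("SAND", "LOAM", "LOAM"))
plyr::join(obs, tex.tr, by = "tex_field")
#> each observation receives the representative clay/silt/sand values of its class
```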
#### 5\.3\.0\.17 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/). Data download URL: <http://egrpr.esoil.ru/content/1DB.html>
```
if({
russ.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = [paste](https://rdrr.io/r/base/paste.html)(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$wpg2 = russ.HOR$TEXTSTNS
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
## Correct texture fractions:
sumTex <- [rowSums](https://rdrr.io/r/base/colSums.html)(russ.HOR[,[c](https://rdrr.io/r/base/c.html)("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- [rowMeans](https://rdrr.io/r/base/colSums.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(x1=russ.HOR$CORG * 10, x2=russ.HOR$ORGMAT/1.724 * 10), na.rm=TRUE)
russ.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(russ.HOR$oc / 1000 * russ.HOR$DVOL * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$wpg2), 0, russ.HOR$wpg2))/100, 3)
russ.HOR$n_tot <- russ.HOR$NTOT * 10
russ.HOR$ca_ext = russ.HOR$EXCA * 200
russ.HOR$mg_ext = russ.HOR$EXMG * 121
russ.HOR$na_ext = russ.HOR$EXNA * 230
russ.HOR$k_ext = russ.HOR$EXK * 391
## Sampling year not available but with high confidence <2000
russ.HOR$site_obsdate = "1982"
russ.sel.h <- [c](https://rdrr.io/r/base/c.html)("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "oc_d", "c_tot", "NTOT", "PHSLT", "PHH2O", "ph_cacl2", "CECST", "cec_nh4", "ecec", "wpg2", "DVOL", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = russ.sel.h[[which](https://rdrr.io/r/base/which.html)(!russ.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(russ.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ russ.HOR[,i] = NA } }
chemsprops.EGRPR = russ.HOR[,russ.sel.h]
chemsprops.EGRPR$source_db = "Russia_EGRPR"
chemsprops.EGRPR$confidence_degree = 2
chemsprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
chemsprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
chemsprops.EGRPR <- complete.vars(chemsprops.EGRPR, sel=[c](https://rdrr.io/r/base/c.html)("oc", "CLYPPT"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.EGRPR)
#> [1] 4437 36
```
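A worked example (made-up numbers) of the texture-fraction correction above: sand and silt are rescaled so that clay + silt + sand sums to 100 %, while clay is kept fixed:

```
clay <- 20; silt <- 45; sand <- 40                   # sums to 105
sumTex <- clay + silt + sand
sand.c <- sand / ((sumTex - clay) / (100 - clay))    #> approx. 37.6
silt.c <- silt / ((sumTex - clay) / (100 - clay))    #> approx. 42.4
clay + silt.c + sand.c                               #> 100
```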
#### 5\.3\.0\.18 Canada National Pedon DB
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0). Data download URL: <https://open.canada.ca/data/en/>
```
if({
NPDB.nm = [c](https://rdrr.io/r/base/c.html)("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
NPDB.HOR$HISMMN = [paste0](https://rdrr.io/r/base/paste.html)(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
NPDB.HOR$CARB_ORG[NPDB.HOR$CARB_ORG==9] <- NA
NPDB.HOR$N_TOTAL[NPDB.HOR$N_TOTAL==9] <- NA
NPDB.HOR$oc = NPDB.HOR$CARB_ORG * 10
NPDB.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(NPDB.HOR$oc / 1000 * NPDB.HOR$BULK_DEN * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(NPDB.HOR$VC_SAND), 0, NPDB.HOR$VC_SAND))/100, 3)
NPDB.HOR$ca_ext = NPDB.HOR$EXCH_CA * 200
NPDB.HOR$mg_ext = NPDB.HOR$EXCH_MG * 121
NPDB.HOR$na_ext = NPDB.HOR$EXCH_NA * 230
NPDB.HOR$k_ext = NPDB.HOR$EXCH_K * 391
npdb.sel.h = [c](https://rdrr.io/r/base/c.html)("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "oc", "oc_d", "c_tot", "N_TOTAL", "ph_kcl", "PH_H2O", "PH_CACL2", "CEC", "cec_nh4", "ecec", "VC_SAND", "BULK_DEN", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = npdb.sel.h[[which](https://rdrr.io/r/base/which.html)(!npdb.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(NPDB.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ NPDB.HOR[,i] = NA } }
chemsprops.NPDB = NPDB.HOR[,npdb.sel.h]
chemsprops.NPDB$source_db = "Canada_NPDB"
chemsprops.NPDB$confidence_degree = 2
chemsprops.NPDB$project_url = "https://open.canada.ca/data/en/"
chemsprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
chemsprops.NPDB <- complete.vars(chemsprops.NPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc", "PH_H2O", "T_CLAY"), coords = [c](https://rdrr.io/r/base/c.html)("DD_LONG", "DD_LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NPDB)
#> [1] 15946 36
```
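The organic carbon density (`oc_d`) formula used here and in several of the following imports converts organic carbon from g/kg to kg/kg, multiplies by bulk density (converted to kg/m3) and corrects for the volume occupied by coarse fragments (`wpg2`, in %). A worked example with made-up values:

```
oc   <- 45    # organic carbon, g/kg
db   <- 1.25  # bulk density, t/m3 (g/cm3)
wpg2 <- 10    # coarse fragments, vol %
signif(oc / 1000 * db * 1000 * (100 - wpg2) / 100, 3)
#> [1] 50.6   ## kg of organic carbon per m3 of soil
```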
#### 5\.3\.0\.19 Canadian upland forest soil profile and carbon stocks database
* Shaw, C., Hilger, A., Filiatrault, M., \& Kurz, W. (2018\). [A Canadian upland forest soil profile and carbon stocks database](https://doi.org/10.1002/ecy.2159). Ecology, 99(4\), 989\-989\. Data download URL: [https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi\=10\.1002%2Fecy.2159\&file\=ecy2159\-sup\-0001\-DataS1\.zip](https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fecy.2159&file=ecy2159-sup-0001-DataS1.zip)
\*Note: organic horizons have negative upper-depth values, the first mineral soil horizon has an upper depth of 0 cm, and deeper mineral horizons have positive values. These depths need to be shifted before they can be combined with the other international datasets (see the toy example after the code block below).
```
if({
## Reading of the .dat file was tricky
cufs.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/PROFILES.csv", stringsAsFactors = FALSE)
cufs.HOR$LOWER_HZN_LIMIT =cufs.HOR$UPPER_HZN_LIMIT + cufs.HOR$HZN_THICKNESS
## Correct depth (Canadian data can have negative depths for soil horizons):
z.min.cufs <- ddply(cufs.HOR, .(LOCATION_ID), summarize, aggregated = [min](https://rdrr.io/r/base/Extremes.html)(UPPER_HZN_LIMIT, na.rm=TRUE))
z.shift.cufs <- join(cufs.HOR["LOCATION_ID"], z.min.cufs, type="left")$aggregated
## fixed shift
z.shift.cufs <- [ifelse](https://rdrr.io/r/base/ifelse.html)(z.shift.cufs>0, 0, z.shift.cufs)
cufs.HOR$hzn_top <- cufs.HOR$UPPER_HZN_LIMIT - z.shift.cufs
cufs.HOR$hzn_bot <- cufs.HOR$LOWER_HZN_LIMIT - z.shift.cufs
cufs.SITE = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/SITES.csv", stringsAsFactors = FALSE)
cufs.HOR$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LONGITUDE
cufs.HOR$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LATITUDE
cufs.HOR$site_obsdate = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$YEAR_SAMPLED
cufs.HOR$usiteid = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$RELEASE_SOURCE_SITEID
#summary(cufs.HOR$ORG_CARB_PCT)
#hist(cufs.HOR$ORG_CARB_PCT, breaks=45)
cufs.HOR$oc = cufs.HOR$ORG_CARB_PCT*10
#cufs.HOR$c_tot = cufs.HOR$oc + ifelse(is.na(cufs.HOR$CARBONATE_CARB_PCT), 0, cufs.HOR$CARBONATE_CARB_PCT*10)
cufs.HOR$n_tot = cufs.HOR$TOT_NITRO_PCT*10
cufs.HOR$ca_ext = cufs.HOR$EXCH_Ca * 200
cufs.HOR$mg_ext = cufs.HOR$EXCH_Mg * 121
cufs.HOR$na_ext = cufs.HOR$EXCH_Na * 230
cufs.HOR$k_ext = cufs.HOR$EXCH_K * 391
cufs.HOR$ph_cacl2 = cufs.HOR$pH
cufs.HOR$ph_cacl2[!cufs.HOR$pH_H2O_CACL2=="CACL2"] = NA
cufs.HOR$ph_h2o = cufs.HOR$pH
cufs.HOR$ph_h2o[!cufs.HOR$pH_H2O_CACL2=="H2O"] = NA
#summary(cufs.HOR$CF_VOL_PCT) ## is NA == 0??
cufs.HOR$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(cufs.HOR$CF_CORR_FACTOR==1, 0, cufs.HOR$CF_VOL_PCT)
cufs.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cufs.HOR$oc / 1000 * cufs.HOR$BULK_DENSITY * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cufs.HOR$wpg2), 0, cufs.HOR$wpg2))/100, 3)
cufs.sel.h = [c](https://rdrr.io/r/base/c.html)("LOCATION_ID", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "HZN_SEQ_NO", "hzn_top", "hzn_bot", "HORIZON", "TEXT_CLASS", "CLAY_PCT", "SILT_PCT", "SAND_PCT", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "CEC_CALCULATED", "cec_nh4", "ecec", "wpg2", "BULK_DENSITY", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELEC_COND", "ec_12pre")
x.na = cufs.sel.h[[which](https://rdrr.io/r/base/which.html)(!cufs.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cufs.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cufs.HOR[,i] = NA } }
chemsprops.CUFS = cufs.HOR[,cufs.sel.h]
chemsprops.CUFS$source_db = "Canada_CUFS"
chemsprops.CUFS$confidence_degree = 1
chemsprops.CUFS$project_url = "https://cfs.nrcan.gc.ca/publications/centre/nofc"
chemsprops.CUFS$citation_url = "https://doi.org/10.1002/ecy.2159"
chemsprops.CUFS <- complete.vars(chemsprops.CUFS, sel=[c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "CLAY_PCT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CUFS)
#> [1] 15162 36
```
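A toy illustration (hypothetical profile) of the depth correction applied above: the shallowest (organic) horizon of each profile is shifted so that the profile starts at 0 cm and all depths become positive:

```
library(plyr)
prof <- data.frame(LOCATION_ID = c(1, 1, 1),
                   UPPER_HZN_LIMIT = c(-8, 0, 20),
                   LOWER_HZN_LIMIT = c(0, 20, 45))
z.min   <- ddply(prof, .(LOCATION_ID), summarize, aggregated = min(UPPER_HZN_LIMIT, na.rm = TRUE))
z.shift <- join(prof["LOCATION_ID"], z.min, by = "LOCATION_ID")$aggregated
z.shift <- ifelse(z.shift > 0, 0, z.shift)           # only shift profiles starting above 0
data.frame(hzn_top = prof$UPPER_HZN_LIMIT - z.shift,
           hzn_bot = prof$LOWER_HZN_LIMIT - z.shift)
#> hzn_top: 0 8 28; hzn_bot: 8 28 53
```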
#### 5\.3\.0\.20 Permafrost in subarctic Canada
* Estop\-Aragones, C.; Fisher, J.P.; Cooper, M.A.; Thierry, A.; Treharne, R.; Murton, J.B.; Phoenix, G.K.; Charman, D.J.; Williams, M.; Hartley, I.P. (2016\). Bulk density, carbon and nitrogen content in soil profiles from permafrost in subarctic Canada. NERC Environmental Information Data Centre. [https://doi.org/10\.5285/efa2a84b\-3505\-4221\-a7da\-12af3cdc1952](https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952). Data download URL:
```
if({
caperm.HOR = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/Canada/NorthCanada/Bulk_density_CandNcontent_profiles_all_sites.csv")
#measurements::conv_unit("-99 36 15.7", from = "deg_min_sec", to = "dec_deg")
#caperm.HOR$longitude_decimal_degrees = as.numeric(measurements::conv_unit(paste0("-", gsub('\"W', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_West, "UTF-8", "UTF-8", sub=' ')), fixed = TRUE)), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)([paste0](https://rdrr.io/r/base/paste.html)("-", caperm.HOR$Cordinates_West), from = "deg_min_sec", to = "dec_deg"))
#caperm.HOR$latitude_decimal_degrees = as.numeric(measurements::conv_unit(gsub('\"N', '', gsub('o', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_North, "UTF-8", "UTF-8", sub=' '))), fixed = TRUE), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)(caperm.HOR$Cordinates_North, from = "deg_min_sec", to = "dec_deg"))
#plot(caperm.HOR[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
caperm.HOR$site_obsdate = "2013"
caperm.HOR$site_key = [make.unique](https://rdrr.io/r/base/make.unique.html)(caperm.HOR$Soil.core)
#summary(as.factor(caperm.HOR$Soil_depth_cm))
caperm.HOR$hzn_top = caperm.HOR$Soil_depth_cm-1
caperm.HOR$hzn_bot = caperm.HOR$Soil_depth_cm+1
caperm.HOR$db_od = caperm.HOR$Bulk_density_gdrysoil_cm3wetsoil
caperm.HOR$oc = caperm.HOR$Ccontent_percentage_on_drymass * 10
caperm.HOR$n_tot = caperm.HOR$Ncontent_percentage_on_drymass * 10
caperm.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(caperm.HOR$oc / 1000 * caperm.HOR$db_od * 1000, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(caperm.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ caperm.HOR[,i] = NA } }
chemsprops.CAPERM = caperm.HOR[,col.names]
chemsprops.CAPERM$source_db = "Canada_subarctic"
chemsprops.CAPERM$confidence_degree = 2
chemsprops.CAPERM$project_url = "http://arp.arctic.ac.uk/projects/carbon-cycling-linkages-permafrost-systems-cyclops/"
chemsprops.CAPERM$citation_url = "https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952"
chemsprops.CAPERM <- complete.vars(chemsprops.CAPERM, sel=[c](https://rdrr.io/r/base/c.html)("oc", "n_tot"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CAPERM)
#> [1] 1180 36
```
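A small example of the coordinate conversion used above: degree-minute-second strings are converted to decimal degrees with `measurements::conv_unit()` (the script above pastes a "-" onto the western longitudes before converting):

```
library(measurements)
as.numeric(conv_unit("63 14 51.2", from = "deg_min_sec", to = "dec_deg"))
#> approximately 63.2476
```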
#### 5\.3\.0\.21 SOTER China soil profiles
* Dijkshoorn, K., van Engelen, V., \& Huting, J. (2008\). [Soil and landform properties for LADA partner countries](https://isric.org/sites/default/files/isric_report_2008_06.pdf). ISRIC report 2008/06 and GLADA report 2008/03, ISRIC – World Soil Information and FAO, Wageningen. Data download URL: [https://files.isric.org/public/soter/CN\-SOTER.zip](https://files.isric.org/public/soter/CN-SOTER.zip)
```
if({
sot.sites = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Profile.csv")
sot.horizons = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Horizon.csv")
sot.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(sot.sites, sot.horizons), type = "full")
sot.HOR$oc = sot.HOR$SOCA * 10
sot.HOR$ca_ext = sot.HOR$EXCA * 200
sot.HOR$mg_ext = sot.HOR$EXMG * 121
sot.HOR$na_ext = sot.HOR$EXNA * 230
sot.HOR$k_ext = sot.HOR$EXCK * 391
## upper depths are missing and need to be derived from the lower depth (HBDE) of the previous horizon
sot.HOR$hzn_top = NA
sot.HOR$hzn_top[2:[nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)] <- sot.HOR$HBDE[1:([nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)-1)]
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$hzn_top > sot.HOR$HBDE, 0, sot.HOR$hzn_top)
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$HONU==1 & [is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$hzn_top), 0, sot.HOR$hzn_top)
sot.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sot.HOR$oc / 1000 * sot.HOR$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$SDVC), 0, sot.HOR$SDVC))/100, 3)
sot.sel.h = [c](https://rdrr.io/r/base/c.html)("PRID", "PDID", "SAYR", "LNGI", "LATI", "labsampnum", "HONU", "hzn_top","HBDE","HODE", "PSCL", "CLPC", "STPC", "SDTO", "oc", "oc_d", "TOTC", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "SDVC", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = sot.sel.h[[which](https://rdrr.io/r/base/which.html)(!sot.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sot.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sot.HOR[,i] = NA } }
chemsprops.CNSOT = sot.HOR[,sot.sel.h]
chemsprops.CNSOT$source_db = "China_SOTER"
chemsprops.CNSOT$confidence_degree = 8
chemsprops.CNSOT$project_url = "https://www.isric.org/explore/soter"
chemsprops.CNSOT$citation_url = "https://isric.org/sites/default/files/isric_report_2008_06.pdf"
chemsprops.CNSOT <- complete.vars(chemsprops.CNSOT, sel=[c](https://rdrr.io/r/base/c.html)("TOTC", "PHAQ", "CLPC"), coords = [c](https://rdrr.io/r/base/c.html)("LNGI", "LATI"))
}
#> Joining by: PRID, INFR
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CNSOT)
#> [1] 5105 36
```
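A toy illustration (hypothetical horizons) of how the upper depths are derived above: the lower depth (HBDE) of the previous horizon is shifted down one row, reset to 0 at profile breaks, and set to 0 for first horizons:

```
hor <- data.frame(HONU = c(1, 2, 3, 1, 2),       # horizon number within profile
                  HBDE = c(20, 45, 100, 30, 80)) # lower depth, cm
hor$hzn_top <- NA
hor$hzn_top[2:nrow(hor)] <- hor$HBDE[1:(nrow(hor)-1)]
hor$hzn_top <- ifelse(hor$hzn_top > hor$HBDE, 0, hor$hzn_top)              # reset at profile breaks
hor$hzn_top <- ifelse(hor$HONU == 1 & is.na(hor$hzn_top), 0, hor$hzn_top)  # first horizon starts at 0
hor$hzn_top
#> [1]  0 20 45  0 30
```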
#### 5\.3\.0\.22 SISLAC
* Sistema de Información de Suelos de Latinoamérica (SISLAC), Data download URL: [http://54\.229\.242\.119/sislac/es](http://54.229.242.119/sislac/es)
```
if({
sis.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/SA/SISLAC/sislac_profiles_es.csv", stringsAsFactors = FALSE)
#str(sis.hor)
## SOC values for Uruguay do not match the original soil profile data (see e.g. http://www.mgap.gub.uy/sites/default/files/multimedia/skmbt_c45111090914030.pdf)
## compare with:
#sis.hor[sis.hor$perfil_id=="23861",]
## Subset to SISINTA/WOSIS points:
cor.sel = [c](https://rdrr.io/r/base/c.html)([grep](https://rdrr.io/r/base/grep.html)("WoSIS", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)), [grep](https://rdrr.io/r/base/grep.html)("SISINTA", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)))
#length(cor.sel)
sis.hor = sis.hor[cor.sel,]
#summary(sis.hor$analitico_carbono_organico_c)
sis.hor$oc = sis.hor$analitico_carbono_organico_c * 10
sis.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sis.hor$oc / 1000 * sis.hor$analitico_densidad_aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sis.hor$analitico_gravas), 0, sis.hor$analitico_gravas))/100, 3)
#summary(sis.hor$analitico_base_k)
#summary(as.factor(sis.hor$perfil_fecha))
sis.sel.h = [c](https://rdrr.io/r/base/c.html)("perfil_id", "perfil_numero", "perfil_fecha", "perfil_ubicacion_longitud", "perfil_ubicacion_latitud", "id", "layer_sequence", "profundidad_superior", "profundidad_inferior", "hzn_desgn", "tex_psda", "analitico_arcilla", "analitico_limo_2_50", "analitico_arena_total", "oc", "oc_d", "c_tot", "n_tot", "analitico_ph_kcl", "analitico_ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "analitico_gravas", "analitico_densidad_aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "analitico_conductividad", "ec_12pre")
x.na = sis.sel.h[[which](https://rdrr.io/r/base/which.html)(!sis.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sis.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sis.hor[,i] = NA } }
chemsprops.SISLAC = sis.hor[,sis.sel.h]
chemsprops.SISLAC$source_db = "SISLAC"
chemsprops.SISLAC$confidence_degree = 4
chemsprops.SISLAC$project_url = "http://54.229.242.119/sislac/es"
chemsprops.SISLAC$citation_url = "https://hdl.handle.net/10568/49611"
chemsprops.SISLAC <- complete.vars(chemsprops.SISLAC, sel=[c](https://rdrr.io/r/base/c.html)("oc","analitico_ph_kcl","analitico_arcilla"), coords = [c](https://rdrr.io/r/base/c.html)("perfil_ubicacion_longitud", "perfil_ubicacion_latitud"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SISLAC)
#> [1] 49994 36
```
#### 5\.3\.0\.23 FEBR
* Samuel\-Rosa, A., Dalmolin, R. S. D., Moura\-Bueno, J. M., Teixeira, W. G., \& Alba, J. M. F. (2020\). Open legacy soil survey data in Brazil: geospatial data quality and how to improve it. Scientia Agricola, 77(1\). [https://doi.org/10\.1590/1678\-992x\-2017\-0430](https://doi.org/10.1590/1678-992x-2017-0430)
* Free Brazilian Repository for Open Soil Data – febr. Data download URL: <http://www.ufsm.br/febr/>
```
if({
#library(febr)
## download up-to-date copy of data
#febr.lab = febr::layer(dataset = "all", variable="all")
#febr.lab = febr::observation(dataset = "all")
febr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Brasil/FEBR/febr-superconjunto.csv", stringsAsFactors = FALSE, dec = ",", sep = ";")
#head(febr.hor)
#summary(febr.hor$carbono)
#summary(febr.hor$ph)
#summary(febr.hor$dsi) ## bulk density of total soil
febr.hor$clay_tot_psa = febr.hor$argila /10
febr.hor$sand_tot_psa = febr.hor$areia /10
febr.hor$silt_tot_psa = febr.hor$silte /10
febr.hor$wpg2 = (1000-febr.hor$terrafina)/10
febr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(febr.hor$carbono / 1000 * febr.hor$dsi * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(febr.hor$wpg2), 0, febr.hor$wpg2))/100, 3)
febr.sel.h <- [c](https://rdrr.io/r/base/c.html)("observacao_id", "usiteid", "observacao_data", "coord_x", "coord_y", "sisb_id", "camada_id", "profund_sup", "profund_inf", "camada_nome", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "carbono", "oc_d", "c_tot", "nitrogenio", "ph_kcl", "ph", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "dsi", "ca_ext", "mg_ext", "na_ext", "k_ext", "ce", "ec_12pre")
x.na = febr.sel.h[[which](https://rdrr.io/r/base/which.html)(!febr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(febr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ febr.hor[,i] = NA } }
chemsprops.FEBR = febr.hor[,febr.sel.h]
chemsprops.FEBR$source_db = "FEBR"
chemsprops.FEBR$confidence_degree = 4
chemsprops.FEBR$project_url = "http://www.ufsm.br/febr/"
chemsprops.FEBR$citation_url = "https://doi.org/10.1590/1678-992x-2017-0430"
chemsprops.FEBR <- complete.vars(chemsprops.FEBR, sel=[c](https://rdrr.io/r/base/c.html)("carbono","ph","clay_tot_psa","dsi"), coords = [c](https://rdrr.io/r/base/c.html)("coord_x", "coord_y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FEBR)
#> [1] 7842 36
```
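A quick worked example of the FEBR unit handling above: particle-size fractions are reported in g/kg and divided by 10 to obtain %, and the coarse-fragment content (`wpg2`) is derived from the fine-earth fraction (`terrafina`, g/kg):

```
terrafina <- c(1000, 850, 600)   # fine earth (<2 mm), g/kg
(1000 - terrafina) / 10          #> 0, 15 and 40 % coarse fragments (wpg2)
argila <- 320                    # clay, g/kg
argila / 10                      #> 32 % clay
```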
#### 5\.3\.0\.24 PRONASOLOS
* POLIDORO, J., COELHO, M., CARVALHO FILHO, A. D., LUMBRERAS, J., de OLIVEIRA, A. P., VASQUES, G. D. M., … \& BREFIN, M. (2021\). [Programa Nacional de Levantamento e Interpretação de Solos do Brasil (PronaSolos): diretrizes para implementação](https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056). Embrapa Solos\-Documentos (INFOTECA\-E).
* Download URL: <http://geoinfo.cnps.embrapa.br/documents/3013/download>
```
if({
pronas.hor = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)("/mnt/diskstation/data/Soil_points/Brasil/Pronasolos/Perfis_PronaSolos_20201202v2.shp"))
## 34,464 rows
#head(pronas.hor)
#summary(as.numeric(pronas.hor$carbono_or))
#summary(as.numeric(pronas.hor$densidade_))
#summary(as.numeric(pronas.hor$argila))
#summary(as.numeric(pronas.hor$cascalho))
#summary(as.numeric(pronas.hor$ph_h2o))
#summary(as.numeric(pronas.hor$complexo_2))
## A lot of errors / typos e.g. very high values and 0 values!!
#pronas.hor$data_colet[1:50]
pronas.in.name = [c](https://rdrr.io/r/base/c.html)("sigla", "codigo_pon", "data_colet", "gcs_latitu", "gcs_longit", "simbolo_ho", "profundida",
"profundi_1", "cascalho", "areia_tota", "silte", "argila", "densidade_", "ph_h2o", "ph_kcl",
"complexo_s", "complexo_1", "complexo_2", "complexo_3", "valor_s", "carbono_or", "nitrogenio",
"condutivid", "classe_tex")
#pronas.in.name[which(!pronas.in.name %in% names(pronas.hor))]
pronas.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(pronas.hor[,pronas.in.name])
pronas.out.name = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "latitude_decimal_degrees", "longitude_decimal_degrees",
"hzn_desgn", "hzn_bot", "hzn_top", "wpg2", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "db_od", "ph_h2o", "ph_kcl", "ca_ext",
"mg_ext", "k_ext", "na_ext", "cec_sum", "oc", "n_tot", "ec_satp", "tex_psda")
## translate values
pronas.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(pronas.in.name)))
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="sigla")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="codigo_pon")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="data_colet")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="simbolo_ho")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="classe_tex")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_s")]] = "as.numeric(x)*200"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_1")]] = "as.numeric(x)*121"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_2")]] = "as.numeric(x)*391"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_3")]] = "as.numeric(x)*230"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="areia_tota")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="silte")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="argila")]] = "round(as.numeric(x)/10, 1)"
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(pronas.in.name, pronas.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(pronas.fun.lst)), "pronas_soilab_transvalues.csv")
pronas.soil = transvalues(pronas.x, pronas.out.name, pronas.in.name, pronas.fun.lst)
pronas.soil$oc_d = [signif](https://rdrr.io/r/base/Round.html)(pronas.soil$oc / 1000 * pronas.soil$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(pronas.soil$wpg2), 0, pronas.soil$wpg2))/100, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(pronas.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ pronas.soil[,i] = NA } }
chemsprops.PRONASOLOS = pronas.soil[,col.names]
chemsprops.PRONASOLOS$source_db = "PRONASOLOS"
chemsprops.PRONASOLOS$confidence_degree = 2
chemsprops.PRONASOLOS$project_url = "https://geoportal.cprm.gov.br/pronasolos/"
chemsprops.PRONASOLOS$citation_url = "https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056"
chemsprops.PRONASOLOS <- complete.vars(chemsprops.PRONASOLOS, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.PRONASOLOS)
#> [1] 31747 36
```
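The `transvalues()` helper used above is defined earlier in the document. Purely as a hypothetical illustration (not the author's implementation), a minimal sketch of what such a helper might do — evaluate each per-column expression string with `x` bound to the corresponding input column and return a data frame with the output names — could look like this:

```
## NOT the actual implementation -- a minimal sketch only:
transvalues.sketch <- function(df, out.name, in.name, fun.lst){
  out <- lapply(seq_along(in.name), function(i){
    x <- df[, in.name[i]]
    eval(parse(text = fun.lst[[i]]))   # e.g. "as.numeric(x)*200"
  })
  names(out) <- out.name
  as.data.frame(out, stringsAsFactors = FALSE, check.names = FALSE)
}
## hypothetical usage:
#pronas.soil = transvalues.sketch(pronas.x, pronas.out.name, pronas.in.name, pronas.fun.lst)
```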
#### 5\.3\.0\.25 Soil Profile DB for Costa Rica
* Mata, R., Vázquez, A., Rosales, A., \& Salazar, D. (2012\). [Mapa digital de suelos de Costa Rica](http://www.cia.ucr.ac.cr/?page_id=139). Asociación Costarricense de la Ciencia del Suelo, San José, CRC. Escala, 1, 200000\. Data download URL: [http://www.cia.ucr.ac.cr/wp\-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1\.1\.rar](http://www.cia.ucr.ac.cr/wp-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1.1.rar)
* Mata\-Chinchilla, R., \& Castro\-Chinchilla, J. (2019\). Geoportal de suelos de Costa Rica como Bien Público al servicio del país. Revista Tecnología En Marcha, 32(7\), Pág. 51\-56\. [https://doi.org/10\.18845/tm.v32i7\.4259](https://doi.org/10.18845/tm.v32i7.4259)
```
if({
cr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Costa_Rica/Base_de_datos_version_1.2.3.csv", stringsAsFactors = FALSE)
#plot(cr.hor[,c("X","Y")], pch="+", asp=1)
cr.hor$usiteid = [paste](https://rdrr.io/r/base/paste.html)(cr.hor$Provincia, cr.hor$Cantón, cr.hor$Id, sep="_")
#summary(cr.hor$Corg.)
cr.hor$oc = cr.hor$Corg. * 10
cr.hor$Densidad.Aparente = [as.numeric](https://rdrr.io/r/base/numeric.html)([paste0](https://rdrr.io/r/base/paste.html)(cr.hor$Densidad.Aparente))
#summary(cr.hor$K)
cr.hor$ca_ext = cr.hor$Ca * 200
cr.hor$mg_ext = cr.hor$Mg * 121
#cr.hor$na_ext = cr.hor$Na * 230
cr.hor$k_ext = cr.hor$K * 391
cr.hor$wpg2 = NA
cr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cr.hor$oc / 1000 * cr.hor$Densidad.Aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cr.hor$wpg2), 0, cr.hor$wpg2))/100, 3)
cr.sel.h = [c](https://rdrr.io/r/base/c.html)("Id", "usiteid", "Fecha", "X", "Y", "labsampnum", "horizonte", "prof_inicio", "prof_final", "id_hz", "Clase.Textural", "ARCILLA", "LIMO", "ARENA", "oc", "oc_d", "c_tot", "n_tot", "pHKCl", "pH_H2O", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Densidad.Aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cr.sel.h[[which](https://rdrr.io/r/base/which.html)(!cr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cr.hor[,i] = NA } }
chemsprops.CostaRica = cr.hor[,cr.sel.h]
chemsprops.CostaRica$source_db = "CostaRica"
chemsprops.CostaRica$confidence_degree = 4
chemsprops.CostaRica$project_url = "http://www.cia.ucr.ac.cr"
chemsprops.CostaRica$citation_url = "https://doi.org/10.18845/tm.v32i7.4259"
chemsprops.CostaRica <- complete.vars(chemsprops.CostaRica, sel=[c](https://rdrr.io/r/base/c.html)("oc","pH_H2O","ARCILLA","Densidad.Aparente"), coords = [c](https://rdrr.io/r/base/c.html)("X", "Y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CostaRica)
#> [1] 2042 36
```
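The multipliers 200, 121, 230 and 391 used here (and in several other imports) are consistent with converting exchangeable Ca, Mg, Na and K from cmol(+)/kg to mg/kg via their equivalent weights (atomic weight / charge, times 10); a quick check:

```
eq.w <- c(Ca = 40.08/2, Mg = 24.31/2, Na = 22.99, K = 39.10)
eq.w * 10        #> approx. 200.4, 121.6, 229.9 and 391.0 mg/kg per cmol(+)/kg
12.5 * 200       #> e.g. 12.5 cmol(+)/kg of exchangeable Ca = 2500 mg/kg
```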
#### 5\.3\.0\.26 Iran soil profile DB
* Dewan, M. L., \& Famouri, J. (1964\). The soils of Iran. Food and Agriculture Organization of the United Nations.
* Hengl, T., Toomanian, N., Reuter, H. I., \& Malakouti, M. J. (2007\). [Methods to interpolate soil categorical variables from profile observations: Lessons from Iran](https://doi.org/10.1016/j.geoderma.2007.04.022). Geoderma, 140(4\), 417\-427\.
* Mohammad, H. B. (2000\). Soil resources and use potentiality map of Iran. Soil and Water Research Institute, Teheran, Iran.
```
if({
na.s = [c](https://rdrr.io/r/base/c.html)("?","","?.","??", -2147483647, -1.00e+308, "<NA>")
iran.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbana.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:12]
[names](https://rdrr.io/r/base/names.html)(iran.hor) = [c](https://rdrr.io/r/base/c.html)("site_key", "hzn_desgn", "hzn_top", "hzn_bot", "ph_h2o", "ec_satp", "oc", "CACO", "PBS", "sand_tot_psa", "silt_tot_psa", "clay_tot_psa")
iran.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(iran.hor$hzn_top) & iran.hor$hzn_desgn=="A", 0, iran.hor$hzn_top)
iran.hor2 = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbhor.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:8]
[names](https://rdrr.io/r/base/names.html)(iran.hor2) = [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence", "DESI", "hzn_top", "hzn_bot", "M_colour", "tex_psda", "hzn_desgn")
iran.site = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sgdb.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)
[names](https://rdrr.io/r/base/names.html)(iran.site) = [c](https://rdrr.io/r/base/c.html)("usiteid", "latitude_decimal_degrees", "longitude_decimal_degrees", "FAO", "Tax", "site_key")
iran.db = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(iran.site, iran.hor, iran.hor2))
iran.db$oc = iran.db$oc * 10
#summary(iran.db$oc)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(iran.db))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ iran.db[,i] = NA } }
chemsprops.IRANSPDB = iran.db[,col.names]
chemsprops.IRANSPDB$source_db = "Iran_SPDB"
chemsprops.IRANSPDB$confidence_degree = 4
chemsprops.IRANSPDB$project_url = ""
chemsprops.IRANSPDB$citation_url = "https://doi.org/10.1016/j.geoderma.2007.04.022"
chemsprops.IRANSPDB <- complete.vars(chemsprops.IRANSPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.IRANSPDB)
#> [1] 4759 36
```
#### 5\.3\.0\.27 Northern circumpolar permafrost soil profiles
* Hugelius, G., Bockheim, J. G., Camill, P., Elberling, B., Grosse, G., Harden, J. W., … \& Michaelson, G. (2013\). [A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region](https://doi.org/10.5194/essd-5-393-2013). Earth System Science Data (Online), 5(2\). Data download URL: [http://dx.doi.org/10\.5879/ECDS/00000002](http://dx.doi.org/10.5879/ECDS/00000002)
```
if({
ncscd.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/NCSCD/Harden_etal_2012_Hugelius_etal_2013_cleaned_data.csv", stringsAsFactors = FALSE)
ncscd.hors$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$X.C)*10
#summary(ncscd.hors$oc)
#hist(ncscd.hors$Layer.thickness.cm, breaks = 45)
ncscd.hors$Layer.thickness.cm = [ifelse](https://rdrr.io/r/base/ifelse.html)(ncscd.hors$Layer.thickness.cm<0, NA, ncscd.hors$Layer.thickness.cm)
ncscd.hors$hzn_bot = ncscd.hors$Basal.Depth.cm + ncscd.hors$Layer.thickness.cm
ncscd.hors$db_od = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$bulk.density.g.cm.3)
## Can we assume no coarse fragments?
ncscd.hors$wpg2 = 0
ncscd.hors$oc_d = [signif](https://rdrr.io/r/base/Round.html)(ncscd.hors$oc / 1000 * ncscd.hors$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(ncscd.hors$wpg2), 0, ncscd.hors$wpg2))/100, 3)
## very high values >40 kg/m3
ncscd.hors$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(ncscd.hors$Sample.date, format="%d-%m-%Y"), "%Y-%m-%d")
#summary(ncscd.hors$db_od)
ncscd.col = [c](https://rdrr.io/r/base/c.html)("Profile.ID", "citation", "site_obsdate", "Long", "Lat", "labsampnum", "layer_sequence", "Basal.Depth.cm", "hzn_bot", "Horizon.type", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ncscd.col[[which](https://rdrr.io/r/base/which.html)(!ncscd.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ncscd.hors))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ncscd.hors[,i] = NA } }
chemsprops.NCSCD = ncscd.hors[,ncscd.col]
chemsprops.NCSCD$source_db = "NCSCD"
chemsprops.NCSCD$confidence_degree = 10
chemsprops.NCSCD$project_url = "https://bolin.su.se/data/ncscd/"
chemsprops.NCSCD$citation_url = "https://doi.org/10.5194/essd-5-393-2013"
chemsprops.NCSCD = complete.vars(chemsprops.NCSCD, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od"), coords = [c](https://rdrr.io/r/base/c.html)("Long", "Lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NCSCD)
#> [1] 7104 36
```
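A one-line example of the date normalisation used above: "%d-%m-%Y" strings are parsed and re-written in ISO "%Y-%m-%d" form:

```
format(as.Date("07-08-1999", format = "%d-%m-%Y"), "%Y-%m-%d")
#> [1] "1999-08-07"
```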
#### 5\.3\.0\.28 CSIRO National Soil Site Database
* CSIRO (2020\). CSIRO National Soil Site Database. v4\. CSIRO. Data Collection. <https://data.csiro.au/collections/collection/CIcsiro:7526v004>. Data download URL: [https://doi.org/10\.25919/5eeb2a56eac12](https://doi.org/10.25919/5eeb2a56eac12) (available upon request)
* Searle, R. (2014\). The Australian site data collation to support the GlobalSoilMap. GlobalSoilMap: Basis of the global spatial soil information system, 127\.
```
if({
[library](https://rdrr.io/r/base/library.html)([Hmisc](http://biostat.mc.vanderbilt.edu/Hmisc))
cmdb <- [mdb.get](https://rdrr.io/pkg/Hmisc/man/mdb.get.html)("/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612.mdb")
#str(cmdb$SITES)
au.obs = cmdb$OBSERVATIONS[,[c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "o.date.desc", "o.latitude.GDA94", "o.longitude.GDA94")]
au.obs = au.obs[!is.na(au.obs$o.longitude.GDA94) & !is.na(au.obs$o.latitude.GDA94),]  ## drop observations without coordinates
coordinates(au.obs) <- ~o.longitude.GDA94+o.latitude.GDA94
proj4string(au.obs) <- CRS("+proj=longlat +ellps=GRS80 +no_defs")
au.xy <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(au.obs, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
#plot(au.xy[,c("o.longitude.GDA94", "o.latitude.GDA94")])
## all lab results are stored in a single long table and need to be split into separate columns based on the lab method code
#summary(cmdb$LAB_METHODS$LABM.SHORT.NAME)
#write.csv(cmdb$LAB_METHODS, "/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612_lab_methods.csv")
lab.tbl = [list](https://rdrr.io/r/base/list.html)(
[c](https://rdrr.io/r/base/c.html)("6_DC", "6A1", "6A1_UC", "6B1", "6B2", "6B2a", "6B2b", "6B3", "6B4", "6B4a", "6B4b", "6Z"), # %
[c](https://rdrr.io/r/base/c.html)("6B3a"), # g/kg
[c](https://rdrr.io/r/base/c.html)("6H4", "6H4_SCaRP"), # %
[c](https://rdrr.io/r/base/c.html)("7_C_B", "7_NR", "7A1", "7A2", "7A2a", "7A2b", "7A3", "7A4", "7A5", "7A6", "7A6a", "7A6b", "7A6b_MCLW"), # g/kg
[c](https://rdrr.io/r/base/c.html)("4A1", "4_NR", "4A_C_2.5", "4A_C_1", "4G1"),
[c](https://rdrr.io/r/base/c.html)("4C_C_1", "4C1", "4C2", "23A"),
[c](https://rdrr.io/r/base/c.html)("4B_C_2.5", "4B1", "4B2"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_C", "P10_HYD_C", "P10_PB_C", "P10_PB1_C", "P10_CF_C", "P10_I_C"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_Z", "P10_HYD_Z", "P10_PB_Z", "P10_PB1_Z", "P10_CF_Z", "P10_I_Z"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_S", "P10_HYD_S", "P10_PB_S", "P10_PB1_S", "P10_CF_S", "P10_I_S"),
[c](https://rdrr.io/r/base/c.html)("15C1modCEC", "15_HSK_CEC", "15J_CEC"),
[c](https://rdrr.io/r/base/c.html)("15I1", "15I2", "15I3", "15I4", "15D3_CEC"),
[c](https://rdrr.io/r/base/c.html)("15_BASES", "15_NR", "15J_H", "15J1"),
[c](https://rdrr.io/r/base/c.html)("2Z2_Grav", "P10_GRAV"),
[c](https://rdrr.io/r/base/c.html)("503.08a", "P3A_NR", "P3A1", "P3A1_C4", "P3A1_CLOD", "P3A1_e"),
[c](https://rdrr.io/r/base/c.html)("18F1_CA"),
[c](https://rdrr.io/r/base/c.html)("18F1_MG"),
[c](https://rdrr.io/r/base/c.html)("18F1_NA"),
[c](https://rdrr.io/r/base/c.html)("18F1_K", "18F2", "18A1mod", "18_NR", "18A1", "18A1_NR", "18B1", "18B2"),
[c](https://rdrr.io/r/base/c.html)("3_C_B", "3_NR", "3A_TSS"),
[c](https://rdrr.io/r/base/c.html)("3A_C_2.5", "3A1")
)
[names](https://rdrr.io/r/base/names.html)(lab.tbl) = [c](https://rdrr.io/r/base/c.html)("oc", "ocP", "c_tot", "n_tot", "ph_h2o", "ph_kcl", "ph_cacl2", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
val.lst = [lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(lab.tbl), function(i){x <- cmdb$LAB_RESULTS[cmdb$LAB_RESULTS$labm.code [%in%](https://rdrr.io/r/base/match.html) lab.tbl[[i]], [c](https://rdrr.io/r/base/c.html)("agency.code", "proj.code", "s.id", "o.id", "h.no", "labr.value")]; [names](https://rdrr.io/r/base/names.html)(x)[6] <- [names](https://rdrr.io/r/base/names.html)(lab.tbl)[i]; [return](https://rdrr.io/r/base/function.html)(x) })
[names](https://rdrr.io/r/base/names.html)(val.lst) = [names](https://rdrr.io/r/base/names.html)(lab.tbl)
val.lst$oc$oc = val.lst$oc$oc * 10
[names](https://rdrr.io/r/base/names.html)(val.lst$ocP)[6] = "oc"
val.lst$oc <- [rbind](https://rdrr.io/r/base/cbind.html)(val.lst$oc, val.lst$ocP)
val.lst$ocP = NULL
#summary(val.lst$oc$oc)
#str(val.lst, max.level = 1)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(val.lst)){ val.lst[[i]]$h.id <- [paste](https://rdrr.io/r/base/paste.html)(val.lst[[i]]$agency.code, val.lst[[i]]$proj.code, val.lst[[i]]$s.id, val.lst[[i]]$o.id, val.lst[[i]]$h.no, sep="_") }
au.hor <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(val.lst, function(x){x[,6:7]}), match="first")
#str(as.factor(au.hor$h.id))
cmdb$HORIZONS$h.id = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$agency.code, cmdb$HORIZONS$proj.code, cmdb$HORIZONS$s.id, cmdb$HORIZONS$o.id, cmdb$HORIZONS$h.no, sep="_")
cmdb$HORIZONS$hzn_desgn = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$h.desig.master, cmdb$HORIZONS$h.desig.subdiv, cmdb$HORIZONS$h.desig.suffix, sep="")
au.horT <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(cmdb$HORIZONS[,[c](https://rdrr.io/r/base/c.html)("h.id","s.id","h.no","h.texture","hzn_desgn","h.upper.depth","h.lower.depth")], au.hor, au.xy))
au.horT$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(au.horT$o.date.desc, format="%d%m%Y"), "%Y-%m-%d")
au.horT$sand_tot_psa = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$sand_tot_psa), 100-(au.horT$clay_tot_psa + au.horT$silt_tot_psa), au.horT$sand_tot_psa)
au.horT$hzn_top = au.horT$h.upper.depth*100
au.horT$hzn_bot = au.horT$h.lower.depth*100
au.horT$oc_d = [signif](https://rdrr.io/r/base/Round.html)(au.horT$oc / 1000 * au.horT$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$wpg2), 0, au.horT$wpg2))/100, 3)
au.cols.n = [c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "site_obsdate", "o.longitude.GDA94", "o.latitude.GDA94", "h.id", "h.no", "hzn_top", "hzn_bot", "hzn_desgn", "h.texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = au.cols.n[[which](https://rdrr.io/r/base/which.html)(!au.cols.n [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(au.horT))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ au.horT[,i] = NA } }
chemsprops.NatSoil = au.horT[,au.cols.n]
chemsprops.NatSoil$source_db = "CSIRO_NatSoil"
chemsprops.NatSoil$confidence_degree = 4
chemsprops.NatSoil$project_url = "https://www.csiro.au/en/Do-business/Services/Enviro/Soil-archive"
chemsprops.NatSoil$citation_url = "https://doi.org/10.25919/5eeb2a56eac12"
chemsprops.NatSoil = complete.vars(chemsprops.NatSoil, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od","clay_tot_psa","ph_h2o"), coords = [c](https://rdrr.io/r/base/c.html)("o.longitude.GDA94", "o.latitude.GDA94"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NatSoil)
#> [1] 70791 36
```
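A toy version (hypothetical codes and values) of the long-to-wide restructuring of lab results used above: each target variable corresponds to a set of lab-method codes, each group is pulled into its own column, and the pieces are joined on the horizon ID:

```
lab_results <- data.frame(h.id = c("p1_1", "p1_1", "p1_2"),
                          labm.code = c("4A1", "6A1", "4A1"),
                          labr.value = c(6.2, 1.8, 7.0))
lab.codes <- list(ph_h2o = c("4A1", "4_NR"), oc = c("6A1", "6B1"))
val.lst <- lapply(names(lab.codes), function(v){
  x <- lab_results[lab_results$labm.code %in% lab.codes[[v]], c("h.id", "labr.value")]
  names(x)[2] <- v
  x
})
plyr::join_all(val.lst, by = "h.id", type = "full", match = "first")
#> one row per horizon, with ph_h2o and oc as separate columns
```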
#### 5\.3\.0\.29 NAMSOTER
* Coetzee, M. E. (2001\). [NAMSOTER, a SOTER database for Namibia](https://edepot.wur.nl/485173). Agroecological Zoning, 458\.
* Coetzee, M. E. (2009\). Chemical characterisation of the soils of East Central Namibia (Doctoral dissertation, Stellenbosch: University of Stellenbosch).
```
if({
nam.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_profiles.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
nam.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_horizons.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
#summary(nam.hors$TOTN)
#summary(nam.hors$TOTC)
nam.hors$hzn_top <- NA
nam.hors$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(nam.hors$HONU==1, 0, nam.hors$hzn_top)
h.lst <- [lapply](https://rdrr.io/r/base/lapply.html)(1:7, function(x){[which](https://rdrr.io/r/base/which.html)(nam.hors$HONU==x)})
for(i in 2:7){
sel <- [match](https://rdrr.io/r/base/match.html)(nam.hors$PRID[h.lst[[i]]], nam.hors$PRID[h.lst[[i-1]]])
nam.hors$hzn_top[h.lst[[i]]] <- nam.hors$HBDE[h.lst[[i-1]]][sel]
}
nam.hors$HBDE <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(nam.hors$HBDE), nam.hors$hzn_top+50, nam.hors$HBDE)
#summary(nam.hors$HBDE)
namALL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(nam.hors, nam.profs, by=[c](https://rdrr.io/r/base/c.html)("PRID"))
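  ## Note (assumption): the multipliers below convert exchangeable cations from
  ## cmol(+)/kg to mg/kg using the equivalent mass in mg per cmol of charge
  ## (K 391, Ca 200, Mg 121, Na 230), i.e. EXCK/EXCA/EXMG/EXNA are assumed to be in cmol(+)/kg.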
namALL$k_ext = namALL$EXCK * 391
namALL$ca_ext = namALL$EXCA * 200
namALL$mg_ext = namALL$EXMG * 121
namALL$na_ext = namALL$EXNA * 230
#summary(namALL$MINA)
namALL$BULK <- [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$BULK>2.4, NA, namALL$BULK)
namALL$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="D", 80, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="A", 60, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M", 25, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C", 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="V", 1, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="F", 2.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M/A", 40, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C/M", 15, 0))))))))
#hist(namALL$wpg2)
namALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(namALL$TOTC / 1000 * namALL$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(namALL$wpg2), 0, namALL$wpg2))/100, 3)
#summary(namALL$oc_d)
#summary(namALL$PHAQ) ## very high ph
namALL$site_obsdate = 2000
nam.col = [c](https://rdrr.io/r/base/c.html)("PRID", "SLID", "site_obsdate", "LONG", "LATI", "labsampnum", "HONU", "hzn_top", "HBDE", "HODE", "PSCL", "CLPC", "STPC", "SDTO", "TOTC", "oc_d", "c_tot", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "wpg2", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELCO", "ec_12pre")
x.na = nam.col[[which](https://rdrr.io/r/base/which.html)(!nam.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(namALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ namALL[,i] = NA } }
chemsprops.NAMSOTER = namALL[,nam.col]
chemsprops.NAMSOTER$source_db = "NAMSOTER"
chemsprops.NAMSOTER$confidence_degree = 2
chemsprops.NAMSOTER$project_url = ""
chemsprops.NAMSOTER$citation_url = "https://edepot.wur.nl/485173"
chemsprops.NAMSOTER = complete.vars(chemsprops.NAMSOTER, sel = [c](https://rdrr.io/r/base/c.html)("TOTC","CLPC","PHAQ"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LATI"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NAMSOTER)
#> [1] 2953 36
```
#### 5\.3\.0\.30 Worldwide organic soil carbon and nitrogen data
* Zinke, P. J., Millemann, R. E., \& Boden, T. A. (1986\). [Worldwide organic soil carbon and nitrogen data](https://cdiac.ess-dive.lbl.gov/ftp/ndp018/ndp018.pdf). Carbon Dioxide Information Center, Environmental Sciences Division, Oak Ridge National Laboratory. Data download URL: [https://dx.doi.org/10\.3334/CDIAC/lue.ndp018](https://dx.doi.org/10.3334/CDIAC/lue.ndp018)
* Note: poor spatial location accuracy, i.e. \<10 km. Bulk density for many points has been estimated, not measured. The sampling year has not been recorded, but the literature indicates: 1965, 1974, 1976, 1978, 1979, 1984\. Most of the samples come from natural (undisturbed) vegetation areas.
```
if({
ndp.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISCND/ndp018.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "?", "NA"), stringsAsFactors = FALSE)
[names](https://rdrr.io/r/base/names.html)(ndp.profs) = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "CARBON", "NITROGEN", "LAT", "LONG", "ELEV", "SOURCE", "HOLDRIGE", "OLSON", "PARENT")
for(j in [c](https://rdrr.io/r/base/c.html)("CARBON","NITROGEN","ELEV")){ ndp.profs[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(ndp.profs[,j]) }
#summary(ndp.profs$CARBON)
lat.s <- [grep](https://rdrr.io/r/base/grep.html)("S", ndp.profs$LAT) # lat.n <- grep("N", ndp.profs$LAT)
ndp.profs$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LAT))
ndp.profs$latitude_decimal_degrees[lat.s] = ndp.profs$latitude_decimal_degrees[lat.s] * -1
lon.w <- [grep](https://rdrr.io/r/base/grep.html)("W", ndp.profs$LONG) # lon.e <- grep("E", ndp.profs$LONG, fixed = TRUE)
ndp.profs$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LONG))
ndp.profs$longitude_decimal_degrees[lon.w] = ndp.profs$longitude_decimal_degrees[lon.w] * -1
#plot(ndp.profs[,c("longitude_decimal_degrees", "latitude_decimal_degrees")])
ndp.profs$hzn_top = 0; ndp.profs$hzn_bot = 100
## Sampling years from the doc: 1965, 1974, 1976, 1978, 1979, 1984
ndp.profs$site_obsdate = "1982"
ndp.col = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "CARBON", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ndp.col[[which](https://rdrr.io/r/base/which.html)(!ndp.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ndp.profs))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ndp.profs[,i] = NA } }
chemsprops.ISCND = ndp.profs[,ndp.col]
chemsprops.ISCND$source_db = "ISCND"
chemsprops.ISCND$confidence_degree = 8
chemsprops.ISCND$project_url = "https://iscn.fluxdata.org/data/"
chemsprops.ISCND$citation_url = "https://dx.doi.org/10.3334/CDIAC/lue.ndp018"
chemsprops.ISCND = complete.vars(chemsprops.ISCND, sel = [c](https://rdrr.io/r/base/c.html)("CARBON"), coords = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ISCND)
#> [1] 3977 36
```
#### 5\.3\.0\.31 Interior Alaska Carbon and Nitrogen stocks
* Manies, K., Waldrop, M., and Harden, J. (2020\): Generalized models to estimate carbon and nitrogen stocks of organic soil horizons in Interior Alaska, Earth Syst. Sci. Data, 12, 1745–1757, [https://doi.org/10\.5194/essd\-12\-1745\-2020](https://doi.org/10.5194/essd-12-1745-2020), Data download URL: [https://doi.org/10\.5066/P960N1F9](https://doi.org/10.5066/P960N1F9)
```
if({
al.gps <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Site_GPS_coordinates_v1-1.csv", stringsAsFactors = FALSE)
## Different datums!
#summary(as.factor(al.gps$Datum))
al.gps1 = al.gps[al.gps$Datum=="NAD83",]
coordinates(al.gps1) = ~ Longitude + Latitude
proj4string(al.gps1) = "+proj=longlat +datum=NAD83"
al.gps0 = spTransform(al.gps1, CRS("+proj=longlat +datum=WGS84"))
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Longitude"] = al.gps0@coords[,1]
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Latitude"] = al.gps0@coords[,2]
al.gps$site = al.gps$Site
al.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Generalized_models_for_CandN_Alaska_v1-1.csv", stringsAsFactors = FALSE)
al.hor$hzn_top = al.hor$depth - [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$thickness)
al.hor$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(al.hor$date, format = "%m/%d/%Y"), "%Y-%m-%d")
al.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$carbon) * 10
al.hor$n_tot = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$nitrogen) * 10
al.hor$oc_d = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$Cdensity) * 1000
#summary(al.hor$oc_d)
al.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(al.hor, al.gps, by=[c](https://rdrr.io/r/base/c.html)("site"))
al.col = [c](https://rdrr.io/r/base/c.html)("profile", "description", "site_obsdate", "Longitude", "Latitude", "sampleID", "layer_sequence", "hzn_top", "depth", "Hcode", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BDfine", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = al.col[[which](https://rdrr.io/r/base/which.html)(!al.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(al.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ al.horA[,i] = NA } }
chemsprops.Alaska = al.horA[,al.col]
chemsprops.Alaska$source_db = "Alaska_interior"
chemsprops.Alaska$confidence_degree = 1
chemsprops.Alaska$project_url = "https://www.usgs.gov/centers/gmeg"
chemsprops.Alaska$citation_url = "https://doi.org/10.5194/essd-12-1745-2020"
chemsprops.Alaska = complete.vars(chemsprops.Alaska, sel = [c](https://rdrr.io/r/base/c.html)("oc","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Alaska)
#> [1] 3882 36
```
#### 5\.3\.0\.32 Croatian Soil Pedon data
* Martinović J., (2000\) [“Tla u Hrvatskoj”](https://books.google.nl/books?id=k_a2MgAACAAJ), Monografija, Državna uprava za zaštitu prirode i okoliša, str. 269, Zagreb. ISBN: 9536793059
* Bašić F., (2014\) [“The Soils of Croatia”](https://books.google.nl/books?id=VbJEAAAAQBAJ). World Soils Book Series, Springer Science \& Business Media, 179 pp. ISBN: 9400758154
```
if({
bpht.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_sites_1997.csv", stringsAsFactors = FALSE)
bpht.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_1997.csv", stringsAsFactors = FALSE)
## filter typos
for(j in [c](https://rdrr.io/r/base/c.html)("GOR", "DON", "MKP", "PH1", "PH2", "MSP", "MP", "MG", "HUM", "EXTN", "EXTP", "EXTK", "CAR")){
bpht.hors[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(bpht.hors[,j])
}
## Convert to the USDA standard
bpht.hors$sand_tot_psa <- bpht.hors$MSP * 0.8 + bpht.hors$MKP
bpht.hors$silt_tot_psa <- bpht.hors$MP + bpht.hors$MSP * 0.2
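  ## HUM (humus %) is converted to organic carbon in g/kg using the conventional
  ## Van Bemmelen factor of 1.724 (assumes soil organic matter is ca. 58% carbon)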
bpht.hors$oc <- [signif](https://rdrr.io/r/base/Round.html)(bpht.hors$HUM/1.724 * 10, 3)
## summary(bpht.hors$sand_tot_psa)
bpht.s.lst <- [c](https://rdrr.io/r/base/c.html)("site_key", "UZORAK", "Cro16.30_X", "Cro16.30_Y", "FITOC", "STIJENA", "HID_DREN", "DUBINA")
bpht.hor = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(bpht.site[,bpht.s.lst], bpht.hors)
bpht.hor$wpg2 = bpht.hor$STIJENA
bpht.hor$DON <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(bpht.hor$DON), bpht.hor$GOR+50, bpht.hor$DON)
bpht.hor$depth <- bpht.hor$GOR + (bpht.hor$DON - bpht.hor$GOR)/2
bpht.hor = bpht.hor[,]
bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)] <- bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)]*.3
bpht.hor$sample_key = [make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)(bpht.hor$PEDOL_ID, bpht.hor$OZN, sep="_"))
bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"] <- bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"]/10
## convert N, P, K
  #summary(bpht.hor$EXTK) -- measurement units?
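  ## Note (assumption): the factors below correspond to P2O5 -> P (0.4364) and
  ## K2O -> K (0.8301) times 10, i.e. the inputs are assumed to be oxide forms reported per 100 g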
bpht.hor$p_ext = bpht.hor$EXTP * 4.364
bpht.hor$k_ext = bpht.hor$EXTK * 8.3013
bpht.hor = bpht.hor[,]
## coordinates:
bpht.pnts = SpatialPointsDataFrame(bpht.hor[,[c](https://rdrr.io/r/base/c.html)("Cro16.30_X","Cro16.30_Y")], bpht.hor["site_key"], proj4string = CRS("+proj=tmerc +lat_0=0 +lon_0=16.5 +k=0.9999 +x_0=2500000 +y_0=0 +ellps=bessel +towgs84=550.499,164.116,475.142,5.80967,2.07902,-11.62386,0.99999445824 +units=m"))
bpht.pnts.ll <- spTransform(bpht.pnts, CRS("+proj=longlat +datum=WGS84"))
bpht.hor$longitude_decimal_degrees = bpht.pnts.ll@coords[,1]
bpht.hor$latitude_decimal_degrees = bpht.pnts.ll@coords[,2]
bpht.h.lst <- [c](https://rdrr.io/r/base/c.html)('site_key', 'OZ_LIST_PROF', 'UZORAK', 'longitude_decimal_degrees', 'latitude_decimal_degrees', 'labsampnum', 'layer_sequence', 'GOR', 'DON', 'OZN', 'TT', 'MG', 'silt_tot_psa', 'sand_tot_psa', 'oc', 'oc_d', 'c_tot', 'EXTN', 'PH2', 'PH1', 'ph_cacl2', 'cec_sum', 'cec_nh4', 'ecec', 'wpg2', 'db_od', 'ca_ext', 'mg_ext', 'na_ext', 'k_ext', 'ec_satp', 'ec_12pre')
x.na = bpht.h.lst[[which](https://rdrr.io/r/base/which.html)(!bpht.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(bpht.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ bpht.hor[,i] = NA } }
chemsprops.bpht = bpht.hor[,bpht.h.lst]
chemsprops.bpht$source_db = "Croatian_Soil_Pedon"
chemsprops.bpht$confidence_degree = 1
chemsprops.bpht$project_url = "http://www.haop.hr/"
chemsprops.bpht$citation_url = "https://books.google.nl/books?id=k_a2MgAACAAJ"
chemsprops.bpht = complete.vars(chemsprops.bpht, sel = [c](https://rdrr.io/r/base/c.html)("oc","MG","PH1","k_ext"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.bpht)
#> [1] 5746 36
```
#### 5\.3\.0\.33 Remnant native SOC database
* Sanderman, J., (2017\) “Remnant native SOC database for release.xlsx”, Soil carbon profile data from paired land use comparisons, [https://doi.org/10\.7910/DVN/QQQM8V/8MSBNI](https://doi.org/10.7910/DVN/QQQM8V/8MSBNI), Harvard Dataverse, V1
```
if({
rem.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 3)
rem.site <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 2)
rem.ref <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 4)
rem.site = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.site, rem.ref[,[c](https://rdrr.io/r/base/c.html)("Source.No.","DOI","Sample_year")], by=[c](https://rdrr.io/r/base/c.html)("Source.No."))
rem.site$Site = rem.site$Site.ID
rem.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.hor, rem.site, by=[c](https://rdrr.io/r/base/c.html)("Site"))
rem.horA$hzn_top = rem.horA$'U_depth.(m)'*100
rem.horA$hzn_bot = rem.horA$'L_depth.(m)'*100
rem.horA$db_od = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)')), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'estimated.BD.(Mg/m3)'), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)'))
rem.horA$oc_d = [signif](https://rdrr.io/r/base/Round.html)(rem.horA$'OC.(g/kg)' * rem.horA$db_od, 3)
#summary(rem.horA$oc_d)
rem.col = [c](https://rdrr.io/r/base/c.html)("Source.No.", "Site", "Sample_year", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "OC.(g/kg)", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = rem.col[[which](https://rdrr.io/r/base/which.html)(!rem.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(rem.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ rem.horA[,i] = NA } }
chemsprops.RemnantSOC = rem.horA[,rem.col]
chemsprops.RemnantSOC$source_db = "WHRC_remnant_SOC"
chemsprops.RemnantSOC$confidence_degree = 8
chemsprops.RemnantSOC$project_url = "https://www.woodwellclimate.org/research-area/carbon/"
chemsprops.RemnantSOC$citation_url = "http://dx.doi.org/10.1073/pnas.1706103114"
chemsprops.RemnantSOC = complete.vars(chemsprops.RemnantSOC, sel = [c](https://rdrr.io/r/base/c.html)("OC.(g/kg)","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.RemnantSOC)
#> [1] 1604 36
```
#### 5\.3\.0\.34 Soil Health DB
* Jian, J., Du, X., \& Stewart, R. D. (2020\). A database for global soil health assessment. Scientific Data, 7(1\), 1\-8\. [https://doi.org/10\.1038/s41597\-020\-0356\-3](https://doi.org/10.1038/s41597-020-0356-3). Data download URL: <https://github.com/jinshijian/SoilHealthDB>
Note: some information about the column names is available ([https://www.nature.com/articles/s41597\-020\-0356\-3/tables/3](https://www.nature.com/articles/s41597-020-0356-3/tables/3)), but a detailed explanation is missing.
```
if({
shdb.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/SoilHealthDB/SoilHealthDB_V2.xlsx", sheet = 1, na.strings = [c](https://rdrr.io/r/base/c.html)("NA", "NotAvailable", "Not-available"))
#summary(as.factor(shdb.hor$SamplingDepth))
shdb.hor$hzn_top = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][1] }))
shdb.hor$hzn_bot = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][2] }))
shdb.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_top), 0, shdb.hor$hzn_top)
shdb.hor$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_bot), 15, shdb.hor$hzn_bot)
shdb.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor$BackgroundSOC) * 10
shdb.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(shdb.hor$oc * shdb.hor$SoilBD, 3)
for(j in [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SiltPerc", "SandPerc", "SoilpH")){ shdb.hor[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor[,j]) }
#summary(shdb.hor$oc_d)
shdb.col = [c](https://rdrr.io/r/base/c.html)("StudyID", "ExperimentID", "SamplingYear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "Texture", "ClayPerc", "SiltPerc", "SandPerc", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "SoilpH", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "SoilBD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = shdb.col[[which](https://rdrr.io/r/base/which.html)(!shdb.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(shdb.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ shdb.hor[,i] = NA } }
chemsprops.SoilHealthDB = shdb.hor[,shdb.col]
chemsprops.SoilHealthDB$source_db = "SoilHealthDB"
chemsprops.SoilHealthDB$confidence_degree = 8
chemsprops.SoilHealthDB$project_url = "https://github.com/jinshijian/SoilHealthDB"
chemsprops.SoilHealthDB$citation_url = "https://doi.org/10.1038/s41597-020-0356-3"
chemsprops.SoilHealthDB = complete.vars(chemsprops.SoilHealthDB, sel = [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SoilpH", "oc"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SoilHealthDB)
#> [1] 120 36
```
#### 5\.3\.0\.35 Global Harmonized Dataset of SOC change under perennial crops
* Ledo, A., Hillier, J., Smith, P. et al. (2019\) A global, empirical, harmonised dataset of soil organic carbon changes under perennial crops. Sci Data 6, 57\. [https://doi.org/10\.1038/s41597\-019\-0062\-1](https://doi.org/10.1038/s41597-019-0062-1). Data download URL: [https://doi.org/10\.6084/m9\.figshare.7637210\.v2](https://doi.org/10.6084/m9.figshare.7637210.v2)
Note: many records are missing the sampling year for the PREVIOUS SOC AND SOIL CHARACTERISTICS fields.
```
if({
[library](https://rdrr.io/r/base/library.html)(["readxl"](https://readxl.tidyverse.org))
socpdb <- readxl::[read_excel](https://readxl.tidyverse.org/reference/read_excel.html)("/mnt/diskstation/data/Soil_points/INT/SOCPDB/SOC_perennials_DATABASE.xls", skip=1, sheet = 1)
#names(socpdb)
#summary(as.numeric(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$yearPpub)-5, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)(socpdb$year_measure<1960, NA, socpdb$year_measure)
socpdb$depth_current = socpdb$soil_to_cm_current - socpdb$soil_from_cm_current
socpdb = socpdb[socpdb$depth_current>5,]
socpdb$SOC_g_kg_current = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_current) / (socpdb$depth_current/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$bulk_density_Mg_m3_current) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current))
socpdb$depth_previous = socpdb$soil_to_cm_previous - socpdb$soil_from_cm_previous
socpdb$SOC_g_kg_previous = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_previous) / (socpdb$depth_previous/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$Bulkdensity_previous) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous))
hor.b = [which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(socpdb) [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ID", "plotID", "Longitud", "Latitud", "year_measure", "years_since_luc", "USDA", "original_source"))
socpdb1 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_current", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
#summary(as.numeric(socpdb1$years_since_luc))
## 10 yrs median
socpdb1$site_obsdate = socpdb1$year_measure
socpdb2 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_previous", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
socpdb2$site_obsdate = socpdb2$year_measure - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc)), 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc))
[colnames](https://rdrr.io/r/base/colnames.html)(socpdb2) <- [sub](https://rdrr.io/r/base/grep.html)("_previous", "_current", [colnames](https://rdrr.io/r/base/colnames.html)(socpdb2))
nm.socpdb = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_h2o", "db_od")
sel.socdpb1 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "bulk_density_Mg_m3_current")
sel.socdpb2 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "Bulkdensity_current")
socpdbALL = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([list](https://rdrr.io/r/base/list.html)(socpdb1[,sel.socdpb1], socpdb2[,sel.socdpb2]), function(i){ dplyr::[mutate_all](https://dplyr.tidyverse.org/reference/mutate_all.html)([setNames](https://rdrr.io/r/stats/setNames.html)(i, nm.socpdb), as.character) })))
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(socpdbALL)){ socpdbALL[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdbALL[,j]) }
#summary(socpdbALL$oc) ## mean = 15
#summary(socpdbALL$db_od)
#summary(socpdbALL$ph_h2o)
socpdbALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(socpdbALL$oc * socpdbALL$db_od, 3)
#summary(socpdbALL$oc_d)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(socpdbALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ socpdbALL[,i] = NA } }
chemsprops.SOCPDB <- socpdbALL[,col.names]
chemsprops.SOCPDB$source_db = "SOCPDB"
chemsprops.SOCPDB$confidence_degree = 5
chemsprops.SOCPDB$project_url = "https://africap.info/"
chemsprops.SOCPDB$citation_url = "https://doi.org/10.1038/s41597-019-0062-1"
chemsprops.SOCPDB = complete.vars(chemsprops.SOCPDB, sel = [c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SOCPDB)
#> [1] 1526 36
```
#### 5\.3\.0\.36 Stocks of organic carbon in German agricultural soils (BZE\_LW)
* Poeplau, C., Jacobs, A., Don, A., Vos, C., Schneider, F., Wittnebel, M., … \& Flessa, H. (2020\). [Stocks of organic carbon in German agricultural soils—Key results of the first comprehensive inventory](https://doi.org/10.1002/jpln.202000113). Journal of Plant Nutrition and Soil Science, 183(6\), 665\-681\. [https://doi.org/10\.1002/jpln.202000113](https://doi.org/10.1002/jpln.202000113). Data download URL: [https://doi.org/10\.3220/DATA20200203151139](https://doi.org/10.3220/DATA20200203151139)
Note: to protect data privacy, each coordinate was randomly generated within a 4\-km radius around the planned sampling point. This dataset is hence probably not suitable for spatial analysis or predictive soil mapping.
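To put the 4\-km offset in perspective, here is a rough conversion to decimal degrees (a minimal sketch, assuming a spherical Earth and a latitude of about 51°N):
```
## approximate size of a 4 km random offset in decimal degrees (illustration only)
r_km <- 4
lat <- 51                                # assumed latitude, roughly central Germany
r_km / 111.32                            # latitude offset, ~0.036 degrees
r_km / (111.32 * cos(lat * pi / 180))    # longitude offset, ~0.057 degrees
```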
```
if({
site.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/SITE.xlsx", sheet = 1)
site.de$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([paste0](https://rdrr.io/r/base/paste.html)("01-", site.de$Sampling_month, "-", site.de$Sampling_year), format="%d-%m-%Y"), "%Y-%m-%d")
site.de.xy = site.de[,[c](https://rdrr.io/r/base/c.html)("PointID","xcoord","ycoord")]
## 3104
coordinates(site.de.xy) <- ~xcoord+ycoord
proj4string(site.de.xy) <- CRS("+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
site.de.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.de.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.de$longitude_decimal_degrees = site.de.ll[,2]
site.de$latitude_decimal_degrees = site.de.ll[,3]
hor.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1)
#hor.de = plyr::join(openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1), openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/HORIZON_DATA.xlsx", sheet = 1), by="PointID")
## 17,189 rows
horALL.de = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.de, site.de, by="PointID")
## Sand content [Mass-%]; grain size 63-2000µm (DIN ISO 11277)
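  ## Note (assumption): 20% of the coarse silt fraction (gU, 20-63 µm) is reallocated
  ## to sand below, presumably to approximate the USDA 50 µm sand/silt boundary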
horALL.de$sand_tot_psa <- horALL.de$gS + horALL.de$mS + horALL.de$fS + 0.2 * horALL.de$gU
horALL.de$silt_tot_psa <- horALL.de$fU + horALL.de$mU + 0.8 * horALL.de$gU
## Convert millisiemens/meter [mS/m] to microsiemens/centimeter [μS/cm, uS/cm]
horALL.de$ec_satp = horALL.de$EC_H2O / 10
hor.sel.de <- [c](https://rdrr.io/r/base/c.html)("PointID", "Main.soil.type", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "Layer.upper.limit", "Layer.lower.limit", "hzn_desgn", "Soil.texture.class", "Clay", "silt_tot_psa", "sand_tot_psa", "TOC", "oc_d", "TC", "TN", "ph_kcl", "pH_H2O", "pH_CaCl2", "cec_sum", "cec_nh4", "ecec", "Rock.fragment.fraction", "BD_FS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
#summary(horALL.de$TOC) ## mean = 12.3
#summary(horALL.de$BD_FS) ## mean = 1.41
#summary(horALL.de$pH_H2O)
horALL.de$oc_d = [signif](https://rdrr.io/r/base/Round.html)(horALL.de$TOC * horALL.de$BD_FS * (1-horALL.de$Rock.fragment.fraction/100), 3)
#summary(horALL.de$oc_d)
x.na = hor.sel.de[[which](https://rdrr.io/r/base/which.html)(!hor.sel.de [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.de))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.de[,i] = NA } }
chemsprops.BZE_LW <- horALL.de[,hor.sel.de]
chemsprops.BZE_LW$source_db = "BZE_LW"
chemsprops.BZE_LW$confidence_degree = 3
chemsprops.BZE_LW$project_url = "https://www.thuenen.de/de/ak/"
chemsprops.BZE_LW$citation_url = "https://doi.org/10.1002/jpln.202000113"
chemsprops.BZE_LW = complete.vars(chemsprops.BZE_LW, sel = [c](https://rdrr.io/r/base/c.html)("TOC", "pH_H2O", "Clay"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.BZE_LW)
#> [1] 17187 36
```
#### 5\.3\.0\.37 AARDEWERK\-Vlaanderen\-2010
* Beckers, V., Jacxsens, P., Van De Vreken, Ph., Van Meirvenne, M., Van Orshoven, J. (2011\). Gebruik en installatie van de bodemdatabank AARDEWERK\-Vlaanderen\-2010\. Spatial Applications Division Leuven, Belgium. Data download URL: [https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4\-8070\-4220\-afac\-258ea040fb30](https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4-8070-4220-afac-258ea040fb30)
* Ottoy, S., Beckers, V., Jacxsens, P., Hermy, M., \& Van Orshoven, J. (2015\). [Multi\-level statistical soil profiles for assessing regional soil organic carbon stocks](https://doi.org/10.1016/j.geoderma.2015.04.001). Geoderma, 253, 12\-20\. [https://doi.org/10\.1016/j.geoderma.2015\.04\.001](https://doi.org/10.1016/j.geoderma.2015.04.001)
```
if({
site.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Profiel.csv")
site.vl$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([sapply](https://rdrr.io/r/base/lapply.html)(site.vl$Profilering_Datum, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, " ")[[1]][1]}), format="%d-%m-%Y"), "%Y-%m-%d")
site.vl.xy = site.vl[,[c](https://rdrr.io/r/base/c.html)("ID","Coordinaat_Lambert72_X","Coordinaat_Lambert72_Y")]
## 7020
site.vl.xy = site.vl.xy[[complete.cases](https://rdrr.io/r/stats/complete.cases.html)(site.vl.xy),]
coordinates(site.vl.xy) <- ~Coordinaat_Lambert72_X+Coordinaat_Lambert72_Y
proj4string(site.vl.xy) <- CRS("+init=epsg:31300")
site.vl.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.vl.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.vl$longitude_decimal_degrees = join(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_X
site.vl$latitude_decimal_degrees = join(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_Y
site.vl$Profiel_ID = site.vl$ID
hor.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Horizont.csv")
## 42,529 rows
horALL.vl = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.vl, site.vl, by="Profiel_ID")
horALL.vl$oc = horALL.vl$Humus*10 /1.724
[summary](https://rdrr.io/r/base/summary.html)(horALL.vl$oc) ## mean = 7.8
#summary(horALL.vl$pH_H2O)
horALL.vl$hzn_top <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_boven1", "Diepte_grens_boven2")], na.rm=TRUE)/2
horALL.vl$hzn_bot <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_onder1","Diepte_grens_onder2")], na.rm=TRUE)/2
horALL.vl$sand_tot_psa <- horALL.vl$T50_100 + horALL.vl$T100_200 + horALL.vl$T200_500 + horALL.vl$T500_1000 + horALL.vl$T1000_2000
horALL.vl$silt_tot_psa <- horALL.vl$T2_10 + horALL.vl$T10_20 + horALL.vl$T20_50
horALL.vl$tex_psda = [paste0](https://rdrr.io/r/base/paste.html)(horALL.vl$HorizontTextuur_code1, horALL.vl$HorizontTextuur_code2)
## some corrupt coordinates
horALL.vl <- horALL.vl[horALL.vl$latitude_decimal_degrees > 50.6,]
hor.sel.vl <- [c](https://rdrr.io/r/base/c.html)("Profiel_ID", "Bodemgroep", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "Hor_nr", "hzn_top", "hzn_bot", "Naam", "tex_psda", "T0_2", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "pH_KCl", "pH_H2O", "ph_cacl2", "Sorptiecapaciteit_Totaal", "cec_nh4", "ecec", "Tgroter_dan_2000", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = hor.sel.vl[[which](https://rdrr.io/r/base/which.html)(!hor.sel.vl [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.vl))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.vl[,i] = NA } }
chemsprops.Vlaanderen <- horALL.vl[,hor.sel.vl]
chemsprops.Vlaanderen$source_db = "Vlaanderen"
chemsprops.Vlaanderen$confidence_degree = 2
chemsprops.Vlaanderen$project_url = "https://www.dov.vlaanderen.be"
chemsprops.Vlaanderen$citation_url = "https://doi.org/10.1016/j.geoderma.2015.04.001"
chemsprops.Vlaanderen = complete.vars(chemsprops.Vlaanderen, sel = [c](https://rdrr.io/r/base/c.html)("oc", "pH_H2O", "T0_2"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Vlaanderen)
#> [1] 41310 36
```
#### 5\.3\.0\.38 Chilean Soil Organic Carbon database
* Pfeiffer, M., Padarian, J., Osorio, R., Bustamante, N., Olmedo, G. F., Guevara, M., et al. (2020\) [CHLSOC: the Chilean Soil Organic Carbon database, a multi\-institutional collaborative effort](https://doi.org/10.5194/essd-12-457-2020). Earth Syst. Sci. Data, 12, 457–468, [https://doi.org/10\.5194/essd\-12\-457\-2020](https://doi.org/10.5194/essd-12-457-2020). Data download URL: [https://doi.org/10\.17605/OSF.IO/NMYS3](https://doi.org/10.17605/OSF.IO/NMYS3)
```
if({
chl.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Chile/CHLSOC/CHLSOC_v1.0.csv", stringsAsFactors = FALSE)
#summary(chl.hor$oc)
chl.hor$oc = chl.hor$oc*10
#summary(chl.hor$bd)
chl.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(chl.hor$oc * chl.hor$bd * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(chl.hor$crf), 0, chl.hor$crf))/100, 3)
#summary(chl.hor$oc_d)
chl.col = [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "year", "long", "lat", "labsampnum", "layer_sequence", "top", "bottom", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "crf", "bd", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = chl.col[[which](https://rdrr.io/r/base/which.html)(!chl.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(chl.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ chl.hor[,i] = NA } }
chemsprops.CHLSOC = chl.hor[,chl.col]
chemsprops.CHLSOC$source_db = "Chilean_SOCDB"
chemsprops.CHLSOC$confidence_degree = 4
chemsprops.CHLSOC$project_url = "https://doi.org/10.17605/OSF.IO/NMYS3"
chemsprops.CHLSOC$citation_url = "https://doi.org/10.5194/essd-12-457-2020"
chemsprops.CHLSOC = complete.vars(chemsprops.CHLSOC, sel = [c](https://rdrr.io/r/base/c.html)("oc", "bd"), coords = [c](https://rdrr.io/r/base/c.html)("long", "lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CHLSOC)
#> [1] 16371 36
```
#### 5\.3\.0\.39 Scotland (NSIS\_1\)
* Lilly, A., Bell, J.S., Hudson, G., Nolan, A.J. \& Towers. W. (Compilers) (2010\). National soil inventory of Scotland (NSIS\_1\); site location, sampling and profile description protocols. (1978\-1988\). Technical Bulletin. Macaulay Institute, Aberdeen. [https://doi.org/10\.5281/zenodo.4650230](https://doi.org/10.5281/zenodo.4650230). Data download URL: [https://www.hutton.ac.uk/learning/natural\-resource\-datasets/soilshutton/soils\-maps\-scotland/download](https://www.hutton.ac.uk/learning/natural-resource-datasets/soilshutton/soils-maps-scotland/download)
```
if({
sco.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Scotland/NSIS_10km.csv")
coordinates(sco.xy) = ~ easting + northing
proj4string(sco.xy) = "EPSG:27700"
sco.ll = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(sco.xy, CRS("EPSG:4326")))
sco.ll$site_obsdate = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(sco.ll$profile_da, function(x){[substr](https://rdrr.io/r/base/substr.html)(x, [nchar](https://rdrr.io/r/base/nchar.html)(x)-3, [nchar](https://rdrr.io/r/base/nchar.html)(x))}))
#hist(sco.ll$site_obsdate[sco.ll$site_obsdate>1000])
## no points after 1990!!
#summary(sco.ll$exch_k)
sco.in.name = [c](https://rdrr.io/r/base/c.html)("profile_id", "site_obsdate", "easting", "northing", "horz_top", "horz_botto",
"horz_symb", "sample_id", "texture_ps",
"sand_int", "silt_int", "clay", "carbon", "nitrogen", "ph_h2o", "exch_ca",
"exch_mg", "exch_na", "exch_k", "sum_cation")
#sco.in.name[which(!sco.in.name %in% names(sco.ll))]
sco.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sco.ll[,sco.in.name])
#sco.x = sco.x[!sco.x$sample_id==0,]
#summary(sco.x$carbon)
sco.out.name = [c](https://rdrr.io/r/base/c.html)("usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees",
"hzn_bot", "hzn_top", "hzn_desgn", "labsampnum", "tex_psda", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "oc", "n_tot", "ph_h2o", "ca_ext",
"mg_ext", "na_ext", "k_ext", "cec_sum")
## translate values
sco.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(sco.in.name)))
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="profile_id")]] = "paste(x)"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_ca")]] = "as.numeric(x)*200"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_mg")]] = "as.numeric(x)*121"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_k")]] = "as.numeric(x)*391"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_na")]] = "as.numeric(x)*230"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="carbon")]] = "as.numeric(x)*10"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="nitrogen")]] = "as.numeric(x)*10"
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(sco.in.name, sco.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(sco.fun.lst)), "scotland_soilab_transvalues.csv")
sco.soil = transvalues(sco.x, sco.out.name, sco.in.name, sco.fun.lst)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sco.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sco.soil[,i] = NA } }
chemsprops.ScotlandNSIS1 = sco.soil[,col.names]
chemsprops.ScotlandNSIS1$source_db = "ScotlandNSIS1"
chemsprops.ScotlandNSIS1$confidence_degree = 2
chemsprops.ScotlandNSIS1$project_url = "http://soils.environment.gov.scot/"
chemsprops.ScotlandNSIS1$citation_url = "https://doi.org/10.5281/zenodo.4650230"
chemsprops.ScotlandNSIS1 = complete.vars(chemsprops.ScotlandNSIS1, sel = [c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ScotlandNSIS1)
#> [1] 2977 36
```
#### 5\.3\.0\.40 Ecoforest map of Quebec, Canada
* Duchesne, L., Ouimet, R., (2021\). Digital mapping of soil texture in ecoforest polygons in Quebec, Canada. PeerJ 9:e11685 [https://doi.org/10\.7717/peerj.11685](https://doi.org/10.7717/peerj.11685). Data download URL: [https://doi.org/10\.7717/peerj.11685/supp\-1](https://doi.org/10.7717/peerj.11685/supp-1)
```
if({
que.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/Quebec/RawData.csv")
#summary(as.factor(que.xy$Horizon))
## horizon depths were not measured - we assume 15-30 and 30-80
que.xy$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 15, 30)
que.xy$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 30, 80)
que.xy$site_key = que.xy$usiteid
que.xy$latitude_decimal_degrees = que.xy$Latitude
que.xy$longitude_decimal_degrees = que.xy$Longitude
que.xy$hzn_desgn = que.xy$Horizon
que.xy$sand_tot_psa = que.xy$PC_Sand
que.xy$silt_tot_psa = que.xy$PC_Silt
que.xy$clay_tot_psa = que.xy$PC_Clay
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(que.xy))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ que.xy[,i] = NA } }
chemsprops.QuebecTEX = que.xy[,col.names]
chemsprops.QuebecTEX$source_db = "QuebecTEX"
chemsprops.QuebecTEX$confidence_degree = 4
chemsprops.QuebecTEX$project_url = ""
chemsprops.QuebecTEX$citation_url = "https://doi.org/10.7717/peerj.11685"
chemsprops.QuebecTEX = complete.vars(chemsprops.QuebecTEX, sel = [c](https://rdrr.io/r/base/c.html)("clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.QuebecTEX)
#> [1] 26648 36
```
#### 5\.3\.0\.41 Pseudo\-observations
* Pseudo\-observations using simulated points (world deserts)
```
if({
## 0 soil organic carbon + 98% sand content (deserts)
[load](https://rdrr.io/r/base/load.html)("deserts.pnt.rda")
nut.sim <- [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(deserts.pnt, CRS("+proj=longlat +datum=WGS84")))
nut.sim[,1] <- NULL
nut.sim <- plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(nut.sim, [c](https://rdrr.io/r/base/c.html)("x"="longitude_decimal_degrees", "y"="latitude_decimal_degrees"))
nr = [nrow](https://rdrr.io/r/base/nrow.html)(nut.sim)
nut.sim$site_key <- [paste](https://rdrr.io/r/base/paste.html)("Simulated", 1:nr, sep="_")
  ## insert zeros for all nutrients except for the ones we are not sure about:
## http://www.decodedscience.org/chemistry-sahara-sand-elements-dunes/45828
sim.vars = [c](https://rdrr.io/r/base/c.html)("oc", "oc_d", "c_tot", "n_tot", "ecec", "clay_tot_psa", "mg_ext", "k_ext")
nut.sim[,sim.vars] <- 0
nut.sim$silt_tot_psa = 2
nut.sim$sand_tot_psa = 98
nut.sim$hzn_top = 0
nut.sim$hzn_bot = 30
nut.sim$db_od = 1.55
nut.sim2 = nut.sim
nut.sim2$silt_tot_psa = 1
nut.sim2$sand_tot_psa = 99
nut.sim2$hzn_top = 30
nut.sim2$hzn_bot = 60
nut.sim2$db_od = 1.6
nut.simA = [rbind](https://rdrr.io/r/base/cbind.html)(nut.sim, nut.sim2)
#str(nut.simA)
nut.simA$source_db = "Simulated"
nut.simA$confidence_degree = 10
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(nut.simA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ nut.simA[,i] = NA } }
chemsprops.SIM = nut.simA[,col.names]
chemsprops.SIM$project_url = "https://gitlab.com/openlandmap/"
chemsprops.SIM$citation_url = "https://gitlab.com/openlandmap/compiled-ess-point-data-sets/"
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SIM)
#> [1] 718 36
```
Other potential large soil profile DBs of interest:
* Shangguan, W., Dai, Y., Liu, B., Zhu, A., Duan, Q., Wu, L., … \& Chen, D. (2013\). [A China data set of soil properties for land surface modeling](https://doi.org/10.1002/jame.20026). Journal of Advances in Modeling Earth Systems, 5(2\), 212\-224\.
* Salković, E., Djurović, I., Knežević, M., Popović\-Bugarin, V., \& Topalović, A. (2018\). Digitization and mapping of national legacy soil data of Montenegro. Soil and Water Research, 13(2\), 83\-89\. [https://doi.org/10\.17221/81/2017\-SWR](https://doi.org/10.17221/81/2017-SWR)
5\.4 Bind all datasets
-------------------------------
#### 5\.4\.0\.1 Bind and clean\-up
```
[ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*"))
#> [1] "chemsprops.AfSIS1" "chemsprops.AfSPDB"
#> [3] "chemsprops.Alaska" "chemsprops.bpht"
#> [5] "chemsprops.BZE_LW" "chemsprops.CAPERM"
#> [7] "chemsprops.CHLSOC" "chemsprops.CNSOT"
#> [9] "chemsprops.CostaRica" "chemsprops.CUFS"
#> [11] "chemsprops.EGRPR" "chemsprops.FEBR"
#> [13] "chemsprops.FIADB" "chemsprops.FRED"
#> [15] "chemsprops.GEMAS" "chemsprops.GROOT"
#> [17] "chemsprops.IRANSPDB" "chemsprops.ISCND"
#> [19] "chemsprops.LandPKS" "chemsprops.LUCAS"
#> [21] "chemsprops.LUCAS2" "chemsprops.Mangroves"
#> [23] "chemsprops.NAMSOTER" "chemsprops.NatSoil"
#> [25] "chemsprops.NCSCD" "chemsprops.NCSS"
#> [27] "chemsprops.NPDB" "chemsprops.Peatlands"
#> [29] "chemsprops.PRONASOLOS" "chemsprops.QuebecTEX"
#> [31] "chemsprops.RaCA" "chemsprops.RemnantSOC"
#> [33] "chemsprops.ScotlandNSIS1" "chemsprops.SIM"
#> [35] "chemsprops.SISLAC" "chemsprops.SOCPDB"
#> [37] "chemsprops.SoDaH" "chemsprops.SoilHealthDB"
#> [39] "chemsprops.SRDB" "chemsprops.USGS.NGS"
#> [41] "chemsprops.Vlaanderen" "chemsprops.WISE"
tot_sprops = dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*")), function(i){ mutate_all([setNames](https://rdrr.io/r/stats/setNames.html)([get](https://rdrr.io/r/base/get.html)(i), col.names), as.character) }))
## convert to numeric:
for(j in [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "layer_sequence",
"hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa",
"oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum",
"cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext",
"ec_satp", "ec_12pre")){
tot_sprops[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
```
Clean up typos and physically impossible values:
```
tex.rm = [rowSums](https://rdrr.io/r/base/colSums.html)(tot_sprops[,[c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa")])
[summary](https://rdrr.io/r/base/summary.html)(tex.rm)
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> -2997.00 100.00 100.00 99.39 100.00 500.00 274336
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tex.rm<99|[is.na](https://rdrr.io/r/base/NA.html)(tex.rm)|tex.rm>101, NA, tot_sprops[,j])
}
for(j in [c](https://rdrr.io/r/base/c.html)("ph_h2o","ph_kcl","ph_cacl2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in [c](https://rdrr.io/r/base/c.html)("db_od")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in [c](https://rdrr.io/r/base/c.html)("oc")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>800|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
```
Fill in the missing depths:
```
## soil layer depth (middle)
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
[summary](https://rdrr.io/r/base/summary.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$hzn_depth))
#> Mode FALSE TRUE
#> logical 766689 5465
## Note: large number of horizons without a depth
tot_sprops = tot_sprops[,]
#quantile(tot_sprops$hzn_depth, c(0.01,0.99), na.rm=TRUE)
tot_sprops$hzn_depth = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth<0, 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth>800, 800, tot_sprops$hzn_depth))
#hist(tot_sprops$hzn_depth, breaks=45)
```
Summary number of points per data source:
```
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$source_db))
#> AfSIS1 AfSPDB Alaska_interior BZE_LW
#> 4162 60277 3880 17187
#> Canada_CUFS Canada_NPDB Canada_subarctic Chilean_SOCDB
#> 15162 14900 1180 16358
#> China_SOTER CIFOR CostaRica Croatian_Soil_Pedon
#> 5105 561 2029 5746
#> CSIRO_NatSoil FEBR FIADB FRED
#> 70688 7804 23208 625
#> GEMAS_2009 GROOT Iran_SPDB ISCND
#> 4131 718 4677 3977
#> ISRIC_WISE LandPKS LUCAS_2009 LUCAS_2015
#> 23278 41644 21272 21859
#> MangrovesDB NAMSOTER NCSCD PRONASOLOS
#> 7733 2941 7082 31655
#> QuebecTEX RaCA2016 Russia_EGRPR ScotlandNSIS1
#> 26648 53663 4437 2977
#> Simulated SISLAC SOCPDB SoDaH
#> 718 49416 1526 17766
#> SoilHealthDB SRDB USDA_NCSS USGS.NGS
#> 120 1596 135671 9398
#> Vlaanderen WHRC_remnant_SOC
#> 41310 1604
```
Add unique row identifier
```
tot_sprops$uuid = openssl::[md5](https://rdrr.io/pkg/openssl/man/hash.html)([make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)("OpenLandMap", tot_sprops$site_key, tot_sprops$layer_sequence, sep="_")))
```
and unique location based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::[encode_olc](https://rdrr.io/pkg/olctools/man/encode_olc.html)(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
[length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$olc_id)))
#> [1] 205687
## 205,620
```
```
tot_sprops.pnts = tot_sprops[]
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "EPSG:4326"
```
Remove points falling in the sea or similar:
```
if({
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::[extract](https://rdrr.io/pkg/terra/man/extract.html)(mask, terra::[vect](https://rdrr.io/pkg/terra/man/vect.html)(tot_sprops.pnts)) ## TAKES 2 mins
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ov.sprops[,2]))
if([sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]))>0 | [sum](https://rdrr.io/r/base/sum.html)(ov.sprops[,2]==2)>0){
rem.lst = [which](https://rdrr.io/r/base/which.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
}
}
## final number of unique spatial locations:
[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts)
#> [1] 205687
## 203,107
```
#### 5\.4\.0\.2 Histogram plots
Bear in mind that some datasets only represent topsoil (e.g. LUCAS), while others
cover the whole soil depth; hence, higher mean values for some regions (Europe) should
be considered within the context of diverse sampling depths.
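To put the per\-source plots below in context, the typical sampling depth per source can be summarized directly (a minimal sketch using the `tot_sprops` table assembled above):
```
## median sampling depth (cm) per data source
aggregate(hzn_depth ~ source_db, data = tot_sprops, FUN = median)
```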
```
[library](https://rdrr.io/r/base/library.html)([ggplot2](http://ggplot2.tidyverse.org))
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 195328 rows containing non-finite values (stat_boxplot).
```
```
sel.db = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "Canada_NPDB", "FIADB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "hzn_depth", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="SOC wprm", xlab="depth in cm")
```
Note: FIADB includes litter, and its soil samples are taken at fixed depths. The Canada
dataset also shows SOC content for peatlands.
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=db_od)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 537198 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "oc", y = "db_od", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, ylab="Bulk density", xlab="SOC wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc_d))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(oc_d): NaNs produced
#> Warning in log1p(oc_d): NaNs produced
#> Warning: Removed 561232 rows containing non-finite values (stat_boxplot).
```
```
sel.db0 = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "oc_d", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="SOC kg/m3")
```
Note: SOC (%) and SOC density (kg/m3\) show an almost linear relationship (curving toward a fixed value),
except for organic soils (especially litter), where the relationship is slightly shifted.
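The `oc_d` formulas used throughout this chapter reduce to a single conversion, shown here as a small helper (a sketch; units assumed to be g/kg for SOC, Mg/m3 for bulk density and vol. % for coarse fragments):
```
## SOC density (kg/m3) from SOC content, bulk density and coarse fragments >2 mm
soc_density <- function(oc, db_od, wpg2 = 0){
  signif(oc * db_od * (100 - ifelse(is.na(wpg2), 0, wpg2)) / 100, 3)
}
soc_density(oc = 15, db_od = 1.4, wpg2 = 10)
#> [1] 18.9
```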
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "n_tot", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="N total wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=ph_h2o)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 270562 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB"),], x = "ph_kcl", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", xlab="soil pH KCl", ylab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "hzn_depth", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="soil pH H2O", xlab="depth in cm")
```
Note: there seems to be no apparent correlation between soil pH and soil depth.
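One quick way to put a number on this claim is a rank correlation between depth and pH (a sketch; it reuses the `tot_sprops` columns and the `sel.db0` selection defined above):
```
## Sketch: Spearman correlation between horizon depth and pH(H2O)
with(tot_sprops[tot_sprops$source_db %in% sel.db0, ],
     cor(hzn_depth, ph_h2o, use = "complete.obs", method = "spearman"))
```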
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB", "PRONASOLOS"),], x = "ph_h2o", y = "cec_sum", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="CEC", xlab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "ph_h2o", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="soil pH H2O")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=clay_tot_psa)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 279635 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "clay_tot_psa", x = "sand_tot_psa", method = "hexbin",
col = "increment", type = "source_db", ylab="clay %", xlab="sand %")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(cec_sum))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(cec_sum): NaNs produced
#> Warning in log1p(cec_sum): NaNs produced
#> Warning: Removed 514240 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(n_tot))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(n_tot): NaNs produced
#> Warning in log1p(n_tot): NaNs produced
#> Warning: Removed 446028 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(k_ext))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(k_ext): NaNs produced
#> Warning in log1p(k_ext): NaNs produced
#> Warning: Removed 523010 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(ec_satp))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 608509 rows containing non-finite values (stat_boxplot).
```
```
sprops_yrs = tot_sprops[]
sprops_yrs$year = [as.numeric](https://rdrr.io/r/base/numeric.html)([substr](https://rdrr.io/r/base/substr.html)(x=sprops_yrs$site_obsdate, 1, 4))
#> Warning: NAs introduced by coercion
sprops_yrs$year = [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year <1960, NA, [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year>2024, NA, sprops_yrs$year))
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(sprops_yrs, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=year)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 70644 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(sprops_yrs[sprops_yrs$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "oc", x = "year", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="Sampling year")
```
#### 5\.4\.0\.3 Convert to wide format
Add `layer_sequence` where missing, since it is needed to convert the table to wide
format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = [paste](https://rdrr.io/r/base/paste.html)(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if({
[library](https://rdrr.io/r/base/library.html)([dplyr](https://dplyr.tidyverse.org))
## Note: takes >2 mins
l.s1 <- tot_sprops[,[c](https://rdrr.io/r/base/c.html)("dsiteid","hzn_depth")] [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [group_by](https://dplyr.tidyverse.org/reference/group_by.html)(dsiteid) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(layer_sequence.f = data.table::[frank](https://Rdatatable.gitlab.io/data.table/reference/frank.html)(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
}
```
Convert the long table to [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column
(note: this is usually the most computation\- and time\-consuming step):
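For readers less familiar with `data.table::dcast()`, a toy example of the long\-to\-wide reshaping (a hypothetical mini data set, not part of the compilation):
```
## Toy long-to-wide example: one row per profile, one column per layer
library(data.table)
toy <- data.table(olc_id = c("A", "A", "B"),
                  layer_sequence.f = c(1, 2, 1),
                  oc = c(12, 8, 20))
dcast(toy, olc_id ~ layer_sequence.f, value.var = "oc")
## produces columns "1" and "2" with oc per layer; "B" gets NA for layer 2
```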
```
if({
[library](https://rdrr.io/r/base/library.html)([data.table](http://r-datatable.com))
tot_sprops.w = data.table::[dcast](https://Rdatatable.gitlab.io/data.table/reference/dcast.data.table.html)( [as.data.table](https://Rdatatable.gitlab.io/data.table/reference/as.data.table.html)(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = [c](https://rdrr.io/r/base/c.html)("uuid", hor.names[-[which](https://rdrr.io/r/base/which.html)(hor.names [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence"))]),
## "labsampnum", "hzn_desgn", "tex_psda"
#fun=function(x){ mean(x, na.rm=TRUE) },
## Note: does not work for characters
fun=function(x){ x[1] },
verbose = FALSE)
## remove "0" layers added automatically but containing no values
tot_sprops.w = tot_sprops.w[,[grep](https://rdrr.io/r/base/grep.html)("_0$", [colnames](https://rdrr.io/r/base/colnames.html)(tot_sprops.w)):=NULL]
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$source_db=="LUCAS_2009" | tot_sprops_w.pnts$source_db=="LUCAS_2015" | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
out.gpkg = "./out/gpkg/sol_chem.pnts_horizons.gpkg"
#unlink(out.gpkg)
if({
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_chem.pnts_horizons.gpkg", "sol_chem.pnts_horizons", driver="GPKG")
}
```
#### 5\.4\.0\.4 Save RDS files
Remove points that are not allowed to be distributed publicly:
```
sel.rm <- tot_sprops$source_db=="LUCAS_2009" | tot_sprops$source_db=="LUCAS_2015" | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
tot_sprops.s = tot_sprops[!sel.rm,]
```
Plot in the Goode homolosine projection and save final objects:
```
if({
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1], crs=4326)
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_chem.pnts_sites.pdf")
## extremely slow --- takes 15mins
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_chem.pnts_sites.pdf ./img/sol_chem.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_chem.pnts_sites.png ./img/sol_chem.pnts_sites.png")
}
```
Fig. 1: Global compilation of soil profiles and soil samples with chemical and physical properties.
5\.5 Save final analysis\-ready objects:
----------------------------------------
```
saveRDS.gz(tot_sprops.s, "./out/rds/sol_chem.pnts_horizons.rds")
saveRDS.gz(tot_sprops, "/mnt/diskstation/data/Soil_points/sol_chem.pnts_horizons.rds")
saveRDS.gz(tot_sprops.pnts, "/mnt/diskstation/data/Soil_points/sol_chem.pnts_sites.rds")
#library(farff)
#writeARFF(tot_sprops.s, "./out/arff/sol_chem.pnts_horizons.arff", overwrite = TRUE)
## compressed CSV
[write.csv](https://rdrr.io/r/utils/write.table.html)(tot_sprops.s, file=[gzfile](https://rdrr.io/r/base/connections.html)("./out/csv/sol_chem.pnts_horizons.csv.gz"))
## regression matrix:
#saveRDS.gz(rm.sol, "./out/rds/sol_chem.pnts_horizons_rm.rds")
```
Save temp object:
```
save.image.pigz(file="soilchem.RData")
## rmarkdown::render("Index.rmd")
```
5\.1 Overview
-------------
This section describes the import steps used to produce a global compilation of soil
laboratory data with chemical (and physical) soil properties that can then be
used for predictive soil mapping / modeling at global and regional scales.
Read more about soil chemical properties, global soil profile and sample data sets, and related functionality:
* Arrouays, D., Leenaars, J. G., Richer\-de\-Forges, A. C., Adhikari, K., Ballabio, C., Greve, M., … \& Heuvelink, G. (2017\). [Soil legacy data rescue via GlobalSoilMap and other international and national initiatives](https://doi.org/10.1016/j.grj.2017.06.001). GeoResJ, 14, 1\-19\.
* Batjes, N. H., Ribeiro, E., van Oostrum, A., Leenaars, J., Hengl, T., \& de Jesus, J. M. (2017\). [WoSIS: providing standardised soil profile data for the world](http://www.earth-syst-sci-data.net/9/1/2017/). Earth System Science Data, 9(1\), 1\.
* Hengl, T., MacMillan, R.A., (2019\). [Predictive Soil Mapping with R](https://soilmapper.org/). OpenGeoHub foundation, Wageningen, the Netherlands, 370 pages, www.soilmapper.org, ISBN: 978\-0\-359\-30635\-0\.
* Rossiter, D.G.,: [Compendium of Soil Geographical Databases](https://www.isric.org/explore/soil-geographic-databases).
5\.2 Specifications
-------------------
#### 5\.2\.0\.1 Data standards
* Metadata information: [“Soil Survey Investigation Report No. 42\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) and [“Soil Survey Investigation Report No. 45\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052226.pdf),
* Model DB: [National Cooperative Soil Survey (NCSS) Soil Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/),
#### 5\.2\.0\.2 *Target variables:*
```
site.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees",
"latitude_decimal_degrees")
hor.names = [c](https://rdrr.io/r/base/c.html)("labsampnum","site_key","layer_sequence","hzn_top","hzn_bot",
"hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa",
"oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2",
"cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext",
"na_ext", "k_ext", "ec_satp", "ec_12pre")
## target structure:
col.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees",
"latitude_decimal_degrees", "labsampnum", "layer_sequence", "hzn_top",
"hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa",
"sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o",
"ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext",
"mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db",
"confidence_degree", "project_url", "citation_url")
```
Target variables listed:
* `clay_tot_psa`: Clay, Total in % wt for \<2 mm soil fraction,
* `silt_tot_psa`: Silt, Total in % wt for \<2 mm soil fraction,
* `sand_tot_psa`: Sand, Total in % wt for \<2 mm soil fraction,
* `oc`: Carbon, Organic in g/kg for \<2 mm soil fraction,
* `oc_d`: Soil organic carbon density in kg/m3,
* `c_tot`: Carbon, Total in g/kg for \<2 mm soil fraction,
* `n_tot`: Nitrogen, Total NCS in g/kg for \<2 mm soil fraction,
* `ph_kcl`: pH, KCl Suspension for \<2 mm soil fraction,
* `ph_h2o`: pH, 1:1 Soil\-Water Suspension for \<2 mm soil fraction,
* `ph_cacl2`: pH, CaCl2 Suspension for \<2 mm soil fraction,
* `cec_sum`: Cation Exchange Capacity, Summary, in cmol(\+)/kg for \<2 mm soil fraction,
* `cec_nh4`: Cation Exchange Capacity, NH4 prep, in cmol(\+)/kg for \<2 mm soil fraction,
* `ecec`: Cation Exchange Capacity, Effective, CMS derived value default, standard prep, in cmol(\+)/kg for \<2 mm soil fraction,
* `wpg2`: Coarse fragments in % wt for \>2 mm soil fraction,
* `db_od`: Bulk density (Oven Dry) in g/cm3 (4A1h),
* `ca_ext`: Calcium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `mg_ext`: Magnesium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `na_ext`: Sodium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `k_ext`: Potassium, Extractable in mg/kg for \<2 mm soil fraction (usually Mehlich3\),
* `ec_satp`: Electrical Conductivity, Saturation Extract in dS/m for \<2 mm soil fraction,
* `ec_12pre`: Electrical Conductivity, Predict, 1:2 (w/w) in dS/m for \<2 mm soil fraction,
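Several source databases report exchangeable Ca, Mg, Na and K in cmol(\+)/kg; in the import steps below these are converted to mg/kg by multiplying with the approximate mass per cmol of charge (Ca ≈ 200, Mg ≈ 121, Na ≈ 230, K ≈ 391 mg). A small sketch of that conversion (the helper function is illustrative, not part of the original code):
```
## Sketch: exchangeable cations from cmol(+)/kg to mg/kg
cmolc_to_mgkg <- function(x, cation = c("ca", "mg", "na", "k")) {
  cation <- match.arg(cation)
  f <- c(ca = 200, mg = 121, na = 230, k = 391)  # approx. mg per cmol(+)
  signif(x * f[[cation]], 4)
}
cmolc_to_mgkg(2.5, "ca")  # 2.5 cmol(+)/kg Ca -> 500 mg/kg
```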
5\.3 Data import
----------------
#### 5\.3\.0\.1 National Cooperative Soil Survey Characterization Database
* National Cooperative Soil Survey, (2020\). National Cooperative Soil Survey Characterization Database. Data download URL: <http://ncsslabdatamart.sc.egov.usda.gov/>
* O’Geen, A., Walkinshaw, M., \& Beaudette, D. (2017\). SoilWeb: A multifaceted interface to soil survey information. Soil Science Society of America Journal, 81(4\), 853\-862\. [https://doi.org/10\.2136/sssaj2016\.11\.0386n](https://doi.org/10.2136/sssaj2016.11.0386n)
This data set is continuously updated.
```
if({
ncss.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Site_Location.csv", stringsAsFactors = FALSE)
ncss.layer <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Layer.csv", stringsAsFactors = FALSE)
ncss.bdm <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Bulk_Density_and_Moisture.csv", stringsAsFactors = FALSE)
## multiple measurements
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ncss.bdm$prep_code))
ncss.bdm.0 <- ncss.bdm[ncss.bdm$prep_code=="S",]
[summary](https://rdrr.io/r/base/summary.html)(ncss.bdm.0$db_od)
## 0 BD values --- error!
ncss.carb <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Carbon_and_Extractions.csv", stringsAsFactors = FALSE)
ncss.organic <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Organic.csv", stringsAsFactors = FALSE)
ncss.pH <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_pH_and_Carbonates.csv", stringsAsFactors = FALSE)
#str(ncss.pH)
#summary(ncss.pH$ph_h2o)
#summary(!is.na(ncss.pH$ph_h2o))
ncss.PSDA <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_PSDA_and_Rock_Fragments.csv", stringsAsFactors = FALSE)
ncss.CEC <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_CEC_and_Bases.csv")
ncss.salt <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Salt.csv")
ncss.horizons <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(ncss.bdm.0, ncss.layer, ncss.carb, ncss.organic[,[c](https://rdrr.io/r/base/c.html)("labsampnum", "result_source_key", "c_tot", "n_tot", "db_od", "oc")], ncss.pH, ncss.PSDA, ncss.CEC, ncss.salt), type = "full", by="labsampnum")
#head(ncss.horizons)
[nrow](https://rdrr.io/r/base/nrow.html)(ncss.horizons)
ncss.horizons$oc_d = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$oc / 100 * ncss.horizons$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(ncss.horizons$wpg2), 0, ncss.horizons$wpg2))/100, 3)
ncss.horizons$ca_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$ca_nh4 * 200, 4)
ncss.horizons$mg_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$mg_nh4 * 121, 3)
ncss.horizons$na_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$na_nh4 * 230, 3)
ncss.horizons$k_ext = [signif](https://rdrr.io/r/base/Round.html)(ncss.horizons$k_nh4 * 391, 3)
#summary(ncss.horizons$oc_d)
## Values <0!!
chemsprops.NCSS = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(ncss.site[,site.names], ncss.horizons[,hor.names], by="site_key")
chemsprops.NCSS$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(chemsprops.NCSS$site_obsdate, format="%m/%d/%Y"), "%Y-%m-%d")
chemsprops.NCSS$source_db = "USDA_NCSS"
#dim(chemsprops.NCSS)
chemsprops.NCSS$oc = chemsprops.NCSS$oc * 10
chemsprops.NCSS$n_tot = chemsprops.NCSS$n_tot * 10
#hist(log1p(chemsprops.NCSS$oc), breaks=45, col="gray")
chemsprops.NCSS$confidence_degree = 1
chemsprops.NCSS$project_url = "http://ncsslabdatamart.sc.egov.usda.gov/"
chemsprops.NCSS$citation_url = "https://doi.org/10.2136/sssaj2016.11.0386n"
chemsprops.NCSS = complete.vars(chemsprops.NCSS, sel=[c](https://rdrr.io/r/base/c.html)("tex_psda","oc","clay_tot_psa","ecec","ph_h2o","ec_12pre","k_ext"))
#rm(ncss.horizons)
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NCSS)
#> [1] 136011 36
#summary(!is.na(chemsprops.NCSS$oc))
## texture classes need to be cleaned-up
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(chemsprops.NCSS$tex_psda))
#> c C cl CL
#> 2391 10908 19 10158 5
#> cos CoS COS cosl CoSL
#> 2424 6 4 4543 2
#> Fine Sandy Loam fs fsl FSL l
#> 1 2357 10701 16 17038
#> L lcos LCoS lfs ls
#> 2 2933 3 1805 3166
#> LS lvfs s S sc
#> 1 152 2958 1 601
#> scl SCL si sic SiC
#> 5456 5 854 7123 11
#> sicl SiCL sil SiL SIL
#> 13718 14 22230 4 28
#> sl SL vfs vfsl VFSL
#> 5547 13 64 2678 3
#> NA's
#> 6068
```
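The texture\-class summary above mixes codes and spellings (e.g. `c` vs `C`, `sicl` vs `SiCL`, `Fine Sandy Loam`), confirming the clean\-up note in the code. A minimal, hypothetical normalization sketch (not the harmonization actually applied to the compilation):
```
## Sketch: normalize texture class codes (hypothetical clean-up)
tex <- trimws(tolower(as.character(chemsprops.NCSS$tex_psda)))
tex[tex == "fine sandy loam"] <- "fsl"
summary(as.factor(tex))
```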
#### 5\.3\.0\.2 Rapid Carbon Assessment (RaCA)
* Soil Survey Staff. Rapid Carbon Assessment (RaCA) project. United States Department of Agriculture, Natural Resources Conservation Service. Available online. June 1, 2013 (FY2013 official release). Data download URL: [https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid\=nrcs142p2\_054164](https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid=nrcs142p2_054164)
* **Note**: Locations of each site have been degraded due to confidentiality and only reflect the general position of each site.
* Wills, S. et al. (2013\) [“Rapid carbon assessment (RaCA) methodology: Sampling and Initial Summary. United States Department of Agriculture.”](https://www.nrcs.usda.gov/wps/PA_NRCSConsumption/download?cid=nrcs142p2_052841&ext=pdf) Natural Resources Conservation Service, National Soil Survey Center.
```
if({
raca.df <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCa_general_location.csv", stringsAsFactors = FALSE)
[names](https://rdrr.io/r/base/names.html)(raca.df)[1] = "rcasiteid"
raca.layer <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCA_samples_JULY2016.csv", stringsAsFactors = FALSE)
raca.layer$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_long
raca.layer$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_lat
raca.layer$site_obsdate = "2013"
[summary](https://rdrr.io/r/base/summary.html)(raca.layer$Calc_SOC)
#plot(raca.layer[!duplicated(raca.layer$rcasiteid),c("longitude_decimal_degrees", "latitude_decimal_degrees")])
#summary(raca.layer$SOC_pred1)
## some strange groupings around small values
raca.layer$oc_d = [signif](https://rdrr.io/r/base/Round.html)(raca.layer$Calc_SOC / 100 * raca.layer$Bulkdensity * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(raca.layer$fragvolc), 0, raca.layer$fragvolc))/100, 3)
raca.layer$oc = raca.layer$Calc_SOC * 10
#summary(raca.layer$oc_d)
raca.h.lst <- [c](https://rdrr.io/r/base/c.html)("rcasiteid", "lay_field_label1", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "Lab.Sample.No", "layer_Number", "TOP", "BOT", "hzname", "texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot_ncs", "n_tot_ncs", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "fragvolc", "Bulkdensity", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = raca.h.lst[[which](https://rdrr.io/r/base/which.html)(!raca.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(raca.layer))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ raca.layer[,i] = NA } }
chemsprops.RaCA = raca.layer[,raca.h.lst]
chemsprops.RaCA$source_db = "RaCA2016"
chemsprops.RaCA$confidence_degree = 4
chemsprops.RaCA$project_url = "https://www.nrcs.usda.gov/survey/raca/"
chemsprops.RaCA$citation_url = "https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052841.pdf"
chemsprops.RaCA = complete.vars(chemsprops.RaCA, sel = [c](https://rdrr.io/r/base/c.html)("oc", "fragvolc"))
}
#> Joining by: rcasiteid
#> Joining by: rcasiteid
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.RaCA)
#> [1] 53664 36
```
#### 5\.3\.0\.3 National Geochemical Database Soil
* Smith, D.B., Cannon, W.F., Woodruff, L.G., Solano, Federico, Kilburn, J.E., and Fey, D.L., (2013\). [Geochemical and
mineralogical data for soils of the conterminous United States](http://pubs.usgs.gov/ds/801/). U.S. Geological Survey Data Series 801, 19 p., <http://pubs.usgs.gov/ds/801/>.
* Grossman, J. N. (2004\). [The National Geochemical Survey\-database and documentation](https://doi.org/10.3133/ofr20041001). U.S. Geological Survey Open\-File Report 2004\-1001\. [DOI:10\.3133/ofr20041001](https://doi.org/10.3133/ofr20041001).
* **Note**: NGS focuses on stream\-sediment samples, but also contains many soil samples.
```
if({
ngs.points <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/site.txt", sep=",")
## 4857 pnts
ngs.layers <- [lapply](https://rdrr.io/r/base/lapply.html)([c](https://rdrr.io/r/base/c.html)("top5cm.txt", "ahorizon.txt", "chorizon.txt"), function(i){[read.csv](https://rdrr.io/r/utils/read.table.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/", i), sep=",")})
ngs.layers = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)(ngs.layers)
#dim(ngs.layers)
# 14571 126
#summary(ngs.layers$tot_carb_pct)
#lattice::xyplot(c_org_pct ~ c_tot_pct, ngs.layers, scales=list(x = list(log = 2), y = list(log = 2)))
#lattice::xyplot(c_org_pct ~ tot_clay_pct, ngs.layers, scales=list(y = list(log = 2)))
ngs.layers$c_tot = ngs.layers$c_tot_pct * 10
ngs.layers$oc = ngs.layers$c_org_pct * 10
ngs.layers$hzn_top = [sapply](https://rdrr.io/r/base/lapply.html)(ngs.layers$depth_cm, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-")[[1]][1]})
ngs.layers$hzn_bot = [sapply](https://rdrr.io/r/base/lapply.html)(ngs.layers$depth_cm, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-")[[1]][2]})
#summary(ngs.layers$tot_clay_pct)
#summary(ngs.layers$k_pct) ## very high numbers?
## question is if the geochemical element results are compatible with e.g. k_ext?
t.ngs = [c](https://rdrr.io/r/base/c.html)("lab_id", "site_id", "horizon", "hzn_top", "hzn_bot", "tot_clay_pct", "c_tot", "oc")
ngs.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(ngs.points, ngs.layers[,t.ngs])
ngs.m$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)(ngs.m$colldate, format="%Y-%m-%d")
ngs.h.lst <- [c](https://rdrr.io/r/base/c.html)("site_id", "quad", "site_obsdate", "longitude", "latitude", "lab_id", "layer_sequence", "hzn_top", "hzn_bot", "horizon", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ngs.h.lst[[which](https://rdrr.io/r/base/which.html)(!ngs.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ngs.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ngs.m[,i] = NA } }
chemsprops.USGS.NGS = ngs.m[,ngs.h.lst]
chemsprops.USGS.NGS$source_db = "USGS.NGS"
chemsprops.USGS.NGS$confidence_degree = 1
chemsprops.USGS.NGS$project_url = "https://mrdata.usgs.gov/ds-801/"
chemsprops.USGS.NGS$citation_url = "https://pubs.usgs.gov/ds/801/"
chemsprops.USGS.NGS = complete.vars(chemsprops.USGS.NGS, sel = [c](https://rdrr.io/r/base/c.html)("tot_clay_pct", "oc"), coords = [c](https://rdrr.io/r/base/c.html)("longitude", "latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.USGS.NGS)
#> [1] 9446 36
```
#### 5\.3\.0\.4 Forest Inventory and Analysis Database (FIADB)
* Domke, G. M., Perry, C. H., Walters, B. F., Nave, L. E., Woodall, C. W., \& Swanston, C. W. (2017\). [Toward inventory‐based estimates of soil organic carbon in forests of the United States](https://doi.org/10.1002/eap.1516). Ecological Applications, 27(4\), 1223\-1235\. [https://doi.org/10\.1002/eap.1516](https://doi.org/10.1002/eap.1516)
* Forest Inventory and Analysis, (2014\). The Forest Inventory and Analysis Database: Database description
and user guide version 6\.0\.1 for Phase 3\. U.S. Department of Agriculture, Forest Service. 182 p.
\[Online]. Available: [https://www.fia.fs.fed.us/library/database\-documentation/](https://www.fia.fs.fed.us/library/database-documentation/)
* **Note**: samples are taken only from the top\-soil, either 0–10\.16 cm or 10\.16–20\.32 cm.
```
if({
fia.loc <- vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/PLOT.csv")
fia.loc$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.loc$STATECD, fia.loc$COUNTYCD, fia.loc$PLOT, sep="_")
fia.lab <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_LAB.csv")
fia.lab$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.lab$STATECD, fia.lab$COUNTYCD, fia.lab$PLOT, sep="_")
## 23,765 rows
fia.des <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_SAMPLE_LOC.csv")
fia.des$site_id = [paste](https://rdrr.io/r/base/paste.html)(fia.des$STATECD, fia.des$COUNTYCD, fia.des$PLOT, sep="_")
#fia.lab$TXTRLYR1 = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","TXTRLYR1","INVYR")], match ="first")$TXTRLYR1
fia.lab$TXTRLYR2 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","TXTRLYR2","INVYR")], match ="first")$TXTRLYR2
#summary(as.factor(fia.lab$TXTRLYR1))
fia.lab$tex_psda = [factor](https://rdrr.io/r/base/factor.html)(fia.lab$TXTRLYR2, labels = [c](https://rdrr.io/r/base/c.html)("Organic", "Loamy", "Clayey", "Sandy", "Coarse sand", "Not measured"))
#Code Description
# 0 Organic.
# 1 Loamy.
# 2 Clayey.
# 3 Sandy.
# 4 Coarse sand.
# 9 Not measured - make plot notes
fia.lab$FORFLTHK = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","FORFLTHK","INVYR")], match ="first")$FORFLTHK
#summary(fia.lab$FORFLTHK)
fia.lab$LTRLRTHK = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","LTRLRTHK","INVYR")], match ="first")$LTRLRTHK
fia.lab$tot_thk = [rowSums](https://rdrr.io/r/base/colSums.html)(fia.lab[,[c](https://rdrr.io/r/base/c.html)("FORFLTHK", "LTRLRTHK")], na.rm=TRUE)
fia.lab$DPTHSBSL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab[[c](https://rdrr.io/r/base/c.html)("site_id","INVYR")], fia.des[[c](https://rdrr.io/r/base/c.html)("site_id","DPTHSBSL","INVYR")], match ="first")$DPTHSBSL
#summary(fia.lab$DPTHSBSL)
sel.fia = fia.loc$site_id [%in%](https://rdrr.io/r/base/match.html) fia.lab$site_id
#summary(sel.fia)
# 15,109
fia.loc = fia.loc[sel.fia, [c](https://rdrr.io/r/base/c.html)("site_id", "LON", "LAT")]
#summary(fia.lab$BULK_DENSITY) ## some strange values for BD!
#quantile(fia.lab$BULK_DENSITY, c(0.02, 0.98), na.rm=TRUE)
#summary(fia.lab$C_ORG_PCT)
#summary(as.factor(fia.lab$LAYER_TYPE))
#lattice::xyplot(BULK_DENSITY ~ C_ORG_PCT, fia.lab, scales=list(x = list(log = 2)))
#dim(fia.lab)
# 14571 126
fia.lab$c_tot = fia.lab$C_TOTAL_PCT * 10  ## total C, % to g/kg
fia.lab$oc = fia.lab$C_ORG_PCT * 10  ## organic C, % to g/kg
fia.lab$n_tot = fia.lab$N_TOTAL_PCT * 10
fia.lab$db_od = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$BULK_DENSITY < 0.001 | fia.lab$BULK_DENSITY > 1.8, NA, fia.lab$BULK_DENSITY)
#lattice::xyplot(db_od ~ C_ORG_PCT, fia.lab, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(x=list(log=TRUE, equispaced.log=FALSE)), ylab="Bulk density", xlab="SOC wpct")
#hist(fia.lab$db_od, breaks=45)
## A lot of very small BD measurements
fia.lab$oc_d = [signif](https://rdrr.io/r/base/Round.html)(fia.lab$oc / 100 * fia.lab$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$COARSE_FRACTION_PCT), 0, fia.lab$COARSE_FRACTION_PCT))/100, 3)
#hist(fia.lab$oc_d, breaks=45, col="grey")
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", 0, NA)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", fia.lab$tot_thk, NA)
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 20.3 + fia.lab$tot_thk, fia.lab$hzn_bot)
fia.lab$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 0 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_bot)
#summary(fia.lab$EXCHNG_K) ## Negative values!
fia.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(fia.lab, fia.loc)
#fia.m = fia.m[!duplicated(as.factor(paste(fia.m$site_id, fia.m$INVYR, fia.m$LAYER_TYPE, sep="_"))),]
fia.m$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)(fia.m$SAMPLE_DATE, format="%Y-%m-%d")
sel.d.fia = fia.m$site_obsdate < [as.Date](https://rdrr.io/r/base/as.Date.html)("1980-01-01", format="%Y-%m-%d")
fia.m$site_obsdate[[which](https://rdrr.io/r/base/which.html)(sel.d.fia)] = NA
#hist(fia.m$site_obsdate, breaks=25)
fia.h.lst <- [c](https://rdrr.io/r/base/c.html)("site_id", "usiteid", "site_obsdate", "LON", "LAT", "SAMPLE_ID", "layer_sequence", "hzn_top", "hzn_bot", "LAYER_TYPE", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "PH_H2O", "PH_CACL2", "cec_sum", "cec_nh4", "ECEC", "COARSE_FRACTION_PCT", "db_od", "EXCHNG_CA", "EXCHNG_MG", "EXCHNG_NA", "EXCHNG_K", "ec_satp", "ec_12pre")
x.na = fia.h.lst[[which](https://rdrr.io/r/base/which.html)(!fia.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fia.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fia.m[,i] = NA } }
chemsprops.FIADB = fia.m[,fia.h.lst]
chemsprops.FIADB$source_db = "FIADB"
chemsprops.FIADB$confidence_degree = 2
chemsprops.FIADB$project_url = "http://www.fia.fs.fed.us/"
chemsprops.FIADB$citation_url = "https://www.fia.fs.fed.us/library/database-documentation/"
chemsprops.FIADB = complete.vars(chemsprops.FIADB, sel = [c](https://rdrr.io/r/base/c.html)("PH_H2O", "oc", "EXCHNG_K"), coords = [c](https://rdrr.io/r/base/c.html)("LON", "LAT"))
#str(unique(paste(chemsprops.FIADB$LON, chemsprops.FIADB$LAT, sep="_")))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FIADB)
#> [1] 23208 36
#write.csv(chemsprops.FIADB, "/mnt/diskstation/data/Soil_points/USA/FIADB/fiadb_soil.pnts.csv")
```
#### 5\.3\.0\.5 Africa soil profiles database
* Leenaars, J. G., Van Oostrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\. Data download URL: <https://data.isric.org/>
```
if({
[library](https://rdrr.io/r/base/library.html)([foreign](https://svn.r-project.org/R-packages/trunk/foreign))
afspdb.profiles <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
afspdb.layers <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
afspdb.s.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD")
#summary(afspdb.layers$BlkDens)
## add missing columns
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(afspdb.layers)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(afspdb.layers[,j])) {
afspdb.layers[,j] <- [ifelse](https://rdrr.io/r/base/ifelse.html)(afspdb.layers[,j] < 0, NA, afspdb.layers[,j])
}
}
afspdb.layers$ca_ext = afspdb.layers$ExCa * 200
afspdb.layers$mg_ext = afspdb.layers$ExMg * 121
afspdb.layers$na_ext = afspdb.layers$ExNa * 230
afspdb.layers$k_ext = afspdb.layers$ExK * 391
#summary(afspdb.layers$k_ext)
afspdb.m = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afspdb.profiles[,afspdb.s.lst], afspdb.layers)
afspdb.m$oc_d = [signif](https://rdrr.io/r/base/Round.html)(afspdb.m$OrgC * afspdb.m$BlkDens * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(afspdb.m$CfPc), 0, afspdb.m$CfPc))/100, 3)
#summary(afspdb.m$T_Year)
afspdb.m$T_Year = [ifelse](https://rdrr.io/r/base/ifelse.html)(afspdb.m$T_Year < 0, NA, afspdb.m$T_Year)
afspdb.h.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD", "LayerID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "oc_d", "TotC", "TotalN", "PHKCl", "PHH2O", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "mg_ext", "na_ext", "k_ext", "EC", "ec_12pre")
x.na = afspdb.h.lst[[which](https://rdrr.io/r/base/which.html)(!afspdb.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(afspdb.m))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ afspdb.m[,i] = NA } }
chemsprops.AfSPDB = afspdb.m[,afspdb.h.lst]
chemsprops.AfSPDB$source_db = "AfSPDB"
chemsprops.AfSPDB$confidence_degree = 5
chemsprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
chemsprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
chemsprops.AfSPDB = complete.vars(chemsprops.AfSPDB, sel = [c](https://rdrr.io/r/base/c.html)("LabTxtr","OrgC","Clay","Ecec","PHH2O","EC","k_ext"), coords = [c](https://rdrr.io/r/base/c.html)("X_LonDD", "Y_LatDD"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.AfSPDB)
#> [1] 60306 36
```
#### 5\.3\.0\.6 Africa Soil Information Service (AfSIS) Soil Chemistry
* Towett, E. K., Shepherd, K. D., Tondoh, J. E., Winowiecki, L. A., Lulseged, T., Nyambura, M., … \& Cadisch, G. (2015\). Total elemental composition of soils in Sub\-Saharan Africa and relationship with soil forming factors. Geoderma Regional, 5, 157\-168\. [https://doi.org/10\.1016/j.geodrs.2015\.06\.002](https://doi.org/10.1016/j.geodrs.2015.06.002)
* [AfSIS Soil Chemistry](https://github.com/qedsoftware/afsis-soil-chem-tutorial) produced by World Agroforestry Centre (ICRAF), Quantitative Engineering Design (QED), Center for International Earth Science Information Network (CIESIN), The International Center for Tropical Agriculture (CIAT), Crop Nutrition Laboratory Services (CROPNUTS) and Rothamsted Research (RRES). Data download URL: <https://registry.opendata.aws/afsis/>
```
if({
afsis1.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Georeferences/georeferences.csv")
afsis1.xy$Sampling.date = 2011
afsis1.lst = [list.files](https://rdrr.io/r/base/list.files.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Wet_Chemistry", pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("*.csv$"), full.names = TRUE, recursive = TRUE)
afsis1.hor = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([lapply](https://rdrr.io/r/base/lapply.html)(afsis1.lst, read.csv))
tansis.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Georeferences/georeferences.csv")
#summary(tansis.xy$Sampling.date)
tansis.xy$Sampling.date = 2018
tansis.lst = [list.files](https://rdrr.io/r/base/list.files.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Wet_Chemistry", pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("*.csv$"), full.names = TRUE, recursive = TRUE)
tansis.hor = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([lapply](https://rdrr.io/r/base/lapply.html)(tansis.lst, read.csv))
afsis1t.df = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([list](https://rdrr.io/r/base/list.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afsis1.hor, afsis1.xy, by="SSN"), plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tansis.hor, tansis.xy, by="SSN")))
afsis1t.df$UpDpth = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 20, 0)
afsis1t.df$LowDpth = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 50, 20)
afsis1t.df$LayerNr = [ifelse](https://rdrr.io/r/base/ifelse.html)(afsis1t.df$Depth=="sub", 2, 1)
#summary(afsis1t.df$C...Org)
afsis1t.df$oc = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("C...Org", "X.C")], na.rm=TRUE) * 10
afsis1t.df$c_tot = afsis1t.df$Total.carbon
afsis1t.df$n_tot = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("Total.nitrogen", "X.N")], na.rm=TRUE) * 10
afsis1t.df$ph_h2o = [rowMeans](https://rdrr.io/r/base/colSums.html)(afsis1t.df[,[c](https://rdrr.io/r/base/c.html)("PH", "pH")], na.rm=TRUE)
## multiple texture fractons - which one is the total clay, sand, silt?
## Clay content for water dispersed particles-recorded after 4 minutes of ultrasonication
#summary(afsis1t.df$Psa.w4clay)
#plot(afsis1t.df[,c("Longitude", "Latitude")])
afsis1.h.lst <- [c](https://rdrr.io/r/base/c.html)("SSN", "Site", "Sampling.date", "Longitude", "Latitude", "Soil.material", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Psa.w4clay", "Psa.w4silt", "Psa.w4sand", "oc", "oc_d", "c_tot", "n_tot", "PHKCl", "ph_h2o", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "M3.Mg", "M3.Na", "M3.K", "EC", "ec_12pre")
x.na = afsis1.h.lst[[which](https://rdrr.io/r/base/which.html)(!afsis1.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(afsis1t.df))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ afsis1t.df[,i] = NA } }
chemsprops.AfSIS1 = afsis1t.df[,afsis1.h.lst]
chemsprops.AfSIS1$source_db = "AfSIS1"
chemsprops.AfSIS1$confidence_degree = 2
chemsprops.AfSIS1$project_url = "https://registry.opendata.aws/afsis/"
chemsprops.AfSIS1$citation_url = "https://doi.org/10.1016/j.geodrs.2015.06.002"
chemsprops.AfSIS1 = complete.vars(chemsprops.AfSIS1, sel = [c](https://rdrr.io/r/base/c.html)("Psa.w4clay","oc","ph_h2o","M3.K"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.AfSIS1)
#> [1] 4162 36
```
#### 5\.3\.0\.7 Fine Root Ecology Database (FRED)
* Iversen CM, McCormack ML, Baer JK, Powell AS, Chen W, Collins C, Fan Y, Fanin N, Freschet GT, Guo D, Hogan JA, Kou L, Laughlin DC, Lavely E, Liese R, Lin D, Meier IC, Montagnoli A, Roumet C, See CR, Soper F, Terzaghi M, Valverde\-Barrantes OJ, Wang C, Wright SJ, Wurzburger N, Zadworny M. (2021\). [Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 3](https://roots.ornl.gov/). Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.014/1459186](https://doi.org/10.25581/ornlsfa.014/1459186).
```
if({
[Sys.setenv](https://rdrr.io/r/base/Sys.setenv.html)("VROOM_CONNECTION_SIZE" = 131072 * 2)
fred = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", skip = 10, col_names=FALSE)
## 57,190 x 1,164
#nm.fred = read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Column_Definitions_20210423-091040.csv", header=TRUE)
nm.fred0 = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", nrows=2)
[names](https://rdrr.io/r/base/names.html)(fred) = [make.names](https://rdrr.io/r/base/make.names.html)([t](https://rdrr.io/r/base/t.html)(nm.fred0)[,1])
## 1164 columns!
fred.h.lst = [c](https://rdrr.io/r/base/c.html)("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "oc_d", "c_tot", "Soil.N.content", "ph_kcl", "Soil.pH_Water", "Soil.pH_Salt", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "Soil.effective.cation.exchange.capacity..ECEC.", "wpg2", "Soil.bulk.density", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
fred$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([rowMeans](https://rdrr.io/r/base/colSums.html)(fred[,[c](https://rdrr.io/r/base/c.html)("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE))
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#names(fred)[grep("Notes_Row", names(fred))]
#summary(fred[,grep("clay", names(fred))])
#summary(fred[,grep("cation.exchange", names(fred))])
#summary(fred[,grep("organic.C", names(fred))])
#summary(fred$Soil.organic.C.content)
#summary(fred$Soil.bulk.density)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Upper.sampling.depth)
fred$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Lower.sampling.depth)
## fallback of +/- 5 cm assumes Soil.depth records the midpoint of the sampled layer
fred$oc_d = [signif](https://rdrr.io/r/base/Round.html)(fred$Soil.organic.C.content / 1000 * fred$Soil.bulk.density * 1000, 3)
#summary(fred$oc_d)
x.na = fred.h.lst[[which](https://rdrr.io/r/base/which.html)(!fred.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fred))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
chemsprops.FRED = fred[,fred.h.lst]
#plot(chemsprops.FRED[,4:5])
chemsprops.FRED$source_db = "FRED"
chemsprops.FRED$confidence_degree = 5
chemsprops.FRED$project_url = "https://roots.ornl.gov/"
chemsprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.014/1459186"
chemsprops.FRED = complete.vars(chemsprops.FRED, sel = [c](https://rdrr.io/r/base/c.html)("Soil.organic.C.content", "Soil.texture_Fraction.clay", "Soil.pH_Water"))
## many duplicates
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FRED)
#> [1] 858 36
```
#### 5\.3\.0\.8 Global root traits (GRooT) database (compilation)
* Guerrero‐Ramírez, N. R., Mommer, L., Freschet, G. T., Iversen, C. M., McCormack, M. L., Kattge, J., … \& Weigelt, A. (2021\). [Global root traits (GRooT) database](https://dx.doi.org/10.1111/geb.13179). Global ecology and biogeography, 30(1\), 25\-37\. [https://dx.doi.org/10\.1111/geb.13179](https://dx.doi.org/10.1111/geb.13179)
```
if({
#Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 2)
GROOT = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/INT/GRooT/GRooTFullVersion.csv")
## 114,222 x 73
[c](https://rdrr.io/r/base/c.html)("locationID", "GRooTID", "originalID", "source", "year", "decimalLatitude", "decimalLongitud", "soilpH", "soilTexture", "soilCarbon", "soilNitrogen", "soilPhosphorus", "soilCarbonToNitrogen", "soilBaseCationSaturation", "soilCationExchangeCapacity", "soilOrganicMatter", "soilWaterGravimetric", "soilWaterVolumetric")
#summary(GROOT$soilCarbon)
#summary(!is.na(GROOT$soilCarbon))
#summary(GROOT$soilOrganicMatter)
#summary(GROOT$soilNitrogen)
#summary(GROOT$soilpH)
#summary(as.factor(GROOT$soilTexture))
#lattice::xyplot(soilCarbon ~ soilpH, GROOT, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(y=list(log=TRUE, equispaced.log=FALSE)), ylab="SOC", xlab="pH")
GROOT$site_obsdate = [as.Date](https://rdrr.io/r/base/as.Date.html)([paste0](https://rdrr.io/r/base/paste.html)(GROOT$year, "-01-01"), format="%Y-%m-%d")
GROOT$hzn_top = 0
GROOT$hzn_bot = 30
GROOT.h.lst = [c](https://rdrr.io/r/base/c.html)("locationID", "originalID", "site_obsdate", "decimalLongitud", "decimalLatitude", "GRooTID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "soilTexture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "soilCarbon", "oc_d", "c_tot", "soilNitrogen", "ph_kcl", "soilpH", "ph_cacl2", "soilCationExchangeCapacity", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = GROOT.h.lst[[which](https://rdrr.io/r/base/which.html)(!GROOT.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(GROOT))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ GROOT[,i] = NA } }
chemsprops.GROOT = GROOT[,GROOT.h.lst]
chemsprops.GROOT$source_db = "GROOT"
chemsprops.GROOT$confidence_degree = 8
chemsprops.GROOT$project_url = "https://groot-database.github.io/GRooT/"
chemsprops.GROOT$citation_url = "https://dx.doi.org/10.1111/geb.13179"
chemsprops.GROOT = complete.vars(chemsprops.GROOT, sel = [c](https://rdrr.io/r/base/c.html)("soilCarbon", "soilpH"), coords = [c](https://rdrr.io/r/base/c.html)("decimalLongitud", "decimalLatitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.GROOT)
#> [1] 718 36
```
#### 5\.3\.0\.9 Global Soil Respiration DB
* Bond\-Lamberty, B. and Thomson, A. (2010\). A global database of soil respiration data, Biogeosciences, 7, 1915–1926, [https://doi.org/10\.5194/bg\-7\-1915\-2010](https://doi.org/10.5194/bg-7-1915-2010)
```
if({
srdb = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SRDB/srdb-data.csv")
## 10366 x 85
srdb.h.lst = [c](https://rdrr.io/r/base/c.html)("Site_ID", "Notes", "Study_midyear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "Soil_clay", "Soil_silt", "Soil_sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Soil_BD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
#summary(srdb$Study_midyear)
srdb$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(srdb$C_soildepth), 100, srdb$C_soildepth)
srdb$hzn_top = 0
#summary(srdb$Soil_clay)
#summary(srdb$C_soilmineral)
srdb$oc_d = [signif](https://rdrr.io/r/base/Round.html)(srdb$C_soilmineral / 1000 / (srdb$hzn_bot/100), 3)
#summary(srdb$oc_d)
#summary(srdb$Soil_BD)
srdb$oc = srdb$oc_d / srdb$Soil_BD
#summary(srdb$oc)
x.na = srdb.h.lst[[which](https://rdrr.io/r/base/which.html)(!srdb.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(srdb))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ srdb[,i] = NA } }
chemsprops.SRDB = srdb[,srdb.h.lst]
#plot(chemsprops.SRDB[,4:5])
chemsprops.SRDB$source_db = "SRDB"
chemsprops.SRDB$confidence_degree = 5
chemsprops.SRDB$project_url = "https://github.com/bpbond/srdb/"
chemsprops.SRDB$citation_url = "https://doi.org/10.5194/bg-7-1915-2010"
chemsprops.SRDB = complete.vars(chemsprops.SRDB, sel = [c](https://rdrr.io/r/base/c.html)("oc", "Soil_clay", "Soil_BD"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SRDB)
#> [1] 1596 36
```
#### 5\.3\.0\.10 SOils DAta Harmonization database (SoDaH)
* Wieder, W. R., Pierson, D., Earl, S., Lajtha, K., Baer, S., Ballantyne, F., … \& Weintraub, S. (2020\). [SoDaH: the SOils DAta Harmonization database, an open\-source synthesis of soil data from research networks, version 1\.0](https://doi.org/10.5194/essd-2020-195). Earth System Science Data Discussions, 1\-19\. [https://doi.org/10\.5194/essd\-2020\-195](https://doi.org/10.5194/essd-2020-195). Data download URL: [https://doi.org/10\.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4](https://doi.org/10.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4)
```
if({
sodah.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SoDaH/521_soils_data_harmonization_6e8416fa0c9a2c2872f21ba208e6a919.csv")
#head(sodah.hor)
#summary(sodah.hor$coarse_frac)
#summary(sodah.hor$lyr_soc)
#summary(sodah.hor$lyr_som_WalkleyBlack/1.724)
#summary(as.factor(sodah.hor$observation_date))
sodah.hor$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([substr](https://rdrr.io/r/base/substr.html)(sodah.hor$observation_date, 1, 4))
sodah.hor$oc = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sodah.hor$lyr_soc), sodah.hor$lyr_som_WalkleyBlack/1.724, sodah.hor$lyr_soc) * 10
sodah.hor$n_tot = sodah.hor$lyr_n_tot * 10
sodah.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sodah.hor$oc / 1000 * sodah.hor$bd_samp * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sodah.hor$coarse_frac), 0, sodah.hor$coarse_frac))/100, 3)
sodah.hor$site_key = [paste](https://rdrr.io/r/base/paste.html)(sodah.hor$network, sodah.hor$location_name, sep="_")
sodah.hor$labsampnum = [make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)(sodah.hor$network, sodah.hor$location_name, sodah.hor$L1, sep="_"))
#summary(sodah.hor$oc_d)
sodah.h.lst = [c](https://rdrr.io/r/base/c.html)("site_key", "data_file", "observation_date", "long", "lat", "labsampnum", "layer_sequence", "layer_top", "layer_bot", "hzn", "profile_texture_class", "clay", "silt", "sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl", "cec_sum", "cec_nh4", "ecec", "coarse_frac", "bd_samp", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
x.na = sodah.h.lst[[which](https://rdrr.io/r/base/which.html)(!sodah.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sodah.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sodah.hor[,i] = NA } }
chemsprops.SoDaH = sodah.hor[,sodah.h.lst]
#plot(chemsprops.SoDaH[,4:5])
chemsprops.SoDaH$source_db = "SoDaH"
chemsprops.SoDaH$confidence_degree = 3
chemsprops.SoDaH$project_url = "https://lter.github.io/som-website"
chemsprops.SoDaH$citation_url = "https://doi.org/10.5194/essd-2020-195"
chemsprops.SoDaH = complete.vars(chemsprops.SoDaH, sel = [c](https://rdrr.io/r/base/c.html)("oc", "clay", "ph_h2o"), coords = [c](https://rdrr.io/r/base/c.html)("long", "lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SoDaH)
#> [1] 20383 36
```
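The `oc_d` column derived above (and in most of the imports that follow) is the volumetric organic carbon density in kg/m³: gravimetric organic carbon in g/kg is combined with bulk density in g/cm³ and reduced by the volumetric coarse-fragment fraction. A worked example with made-up values (a sketch of the conversion only):

```
## Worked example of the oc_d conversion used throughout (made-up values):
oc     <- 25    # organic carbon, g/kg
bd     <- 1.3   # bulk density, g/cm3 (= t/m3)
coarse <- 12    # coarse fragments, vol %
## g/kg -> kg/kg, times kg of soil per m3 (bd * 1000), corrected for coarse fragments:
oc_d <- signif(oc / 1000 * bd * 1000 * (100 - coarse) / 100, 3)
oc_d   # -> 28.6 kg/m3
```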
#### 5\.3\.0\.11 ISRIC WISE harmonized soil profile data
* Batjes, N.H. (2009\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management, 25:124–127\. Data download URL: [https://files.isric.org/public/wise/WD\-WISE.zip](https://files.isric.org/public/wise/WD-WISE.zip)
```
if({
wise.site <- [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.s.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_id", "PITREF", "DATEYR", "LONDD", "LATDD")
wise.site$LONDD = [as.numeric](https://rdrr.io/r/base/numeric.html)(wise.site$LONDD)
wise.site$LATDD = [as.numeric](https://rdrr.io/r/base/numeric.html)(wise.site$LATDD)
wise.layer <- [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.layer$ca_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXCA * 200, 4)
wise.layer$mg_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXMG * 121, 3)
wise.layer$na_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXNA * 230, 3)
wise.layer$k_ext = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$EXK * 391, 3)
wise.layer$oc_d = [signif](https://rdrr.io/r/base/Round.html)(wise.layer$ORGC / 1000 * wise.layer$BULKDENS * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wise.layer$GRAVEL), 0, wise.layer$GRAVEL))/100, 3)
wise.h.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_ID", "labsampnum", "HONU", "TOPDEP", "BOTDEP", "DESIG", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "oc_d", "c_tot", "TOTN", "PHKCL", "PHH2O", "PHCACL2", "CECSOIL", "cec_nh4", "ecec", "GRAVEL" , "BULKDENS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ECE", "ec_12pre")
x.na = wise.h.lst[[which](https://rdrr.io/r/base/which.html)(!wise.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(wise.layer))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ wise.layer[,i] = NA } }
chemsprops.WISE = [merge](https://rdrr.io/r/base/merge.html)(wise.site[,wise.s.lst], wise.layer[,wise.h.lst], by.x="WISE3_id", by.y="WISE3_ID")
chemsprops.WISE$source_db = "ISRIC_WISE"
chemsprops.WISE$confidence_degree = 4
chemsprops.WISE$project_url = "https://isric.org"
chemsprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
chemsprops.WISE = complete.vars(chemsprops.WISE, sel = [c](https://rdrr.io/r/base/c.html)("ORGC","CLAY","PHH2O","CECSOIL","k_ext"), coords = [c](https://rdrr.io/r/base/c.html)("LONDD", "LATDD"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.WISE)
#> [1] 23278 36
```
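The ×200, ×121, ×230 and ×391 multipliers used for WISE (and several other imports in this section) convert exchangeable cations from cmol(c)/kg to mg/kg, i.e. ten times the milli-equivalent mass of Ca, Mg, Na and K. A quick sanity check of those factors (a sketch assuming that unit interpretation, not part of the original script):

```
## cmol(c)/kg -> mg/kg: 1 cmol(c)/kg = 10 meq/kg, and 1 meq = atomic mass / charge (mg)
atomic_mass <- c(Ca = 40.08, Mg = 24.31, Na = 22.99, K = 39.10)
charge      <- c(Ca = 2,     Mg = 2,     Na = 1,     K = 1)
10 * atomic_mass / charge   # -> roughly 200.4, 121.6, 229.9 and 391
```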
#### 5\.3\.0\.12 GEMAS
* Reimann, C., Fabian, K., Birke, M., Filzmoser, P., Demetriades, A., Négrel, P., … \& Anderson, M. (2018\). [GEMAS: Establishing geochemical background and threshold for 53 chemical elements in European agricultural soil](https://doi.org/10.1016/j.apgeochem.2017.01.021). Applied Geochemistry, 88, 302\-318\. Data download URL: <http://gemas.geolba.ac.at/>
```
if({
gemas.samples <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/GEMAS/GEMAS.csv", stringsAsFactors = FALSE)
## GEMAS: agricultural soil, 0-20 cm, air dried, <2 mm, aqua regia extraction; data from ACME: total C, TOC, CEC, pH_CaCl2
gemas.samples$hzn_top = 0
gemas.samples$hzn_bot = 20
gemas.samples$oc = gemas.samples$TOC * 10
#summary(gemas.samples$oc)
gemas.samples$c_tot = gemas.samples$C_tot * 10
gemas.samples$site_obsdate = 2009
gemas.h.lst <- [c](https://rdrr.io/r/base/c.html)("ID", "COUNRTY", "site_obsdate", "XCOO", "YCOO", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "TYPE", "tex_psda", "clay", "silt", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = gemas.h.lst[[which](https://rdrr.io/r/base/which.html)(!gemas.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(gemas.samples))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ gemas.samples[,i] = NA } }
chemsprops.GEMAS <- gemas.samples[,gemas.h.lst]
chemsprops.GEMAS$source_db = "GEMAS_2009"
chemsprops.GEMAS$confidence_degree = 2
chemsprops.GEMAS$project_url = "http://gemas.geolba.ac.at/"
chemsprops.GEMAS$citation_url = "https://doi.org/10.1016/j.apgeochem.2017.01.021"
chemsprops.GEMAS = complete.vars(chemsprops.GEMAS, sel = [c](https://rdrr.io/r/base/c.html)("oc","clay","pH_CaCl2"), coords = [c](https://rdrr.io/r/base/c.html)("XCOO", "YCOO"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.GEMAS)
#> [1] 4131 36
```
#### 5\.3\.0\.13 LUCAS soil
* Orgiazzi, A., Ballabio, C., Panagos, P., Jones, A., \& Fernández‐Ugalde, O. (2018\). [LUCAS Soil, the largest expandable soil dataset for Europe: a review](https://doi.org/10.1111/ejss.12499). European Journal of Soil Science, 69(1\), 140\-153\. Data download URL: [https://esdac.jrc.ec.europa.eu/content/lucas\-2009\-topsoil\-data](https://esdac.jrc.ec.europa.eu/content/lucas-2009-topsoil-data)
```
if({
lucas.samples <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_TOPSOIL_v1.xlsx", sheet = 1)
lucas.samples$site_obsdate <- "2009"
#summary(lucas.samples$N)
lucas.ro <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/Romania.xlsx", sheet = 1)
lucas.ro$site_obsdate <- "2012"
[names](https://rdrr.io/r/base/names.html)(lucas.samples)[[which](https://rdrr.io/r/base/which.html)(]
lucas.ro = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(lucas.ro, replace=[c](https://rdrr.io/r/base/c.html)("Soil.ID"="sample_ID", "GPS_X_LONG"="GPS_LONG", "GPS_Y_LAT"="GPS_LAT", "pHinH2O"="pH_in_H2O", "pHinCaCl2"="pH_in_CaCl"))
lucas.bu <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/EU/LUCAS/Bulgaria.xlsx", sheet = 1)
lucas.bu$site_obsdate <- "2012"
[names](https://rdrr.io/r/base/names.html)(lucas.samples)[[which](https://rdrr.io/r/base/which.html)(]
#lucas.ch <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_2015_Topsoil_data_of_Switzerland-with-coordinates.xlsx_.xlsx", sheet = 1, startRow = 2)
#lucas.ch = plyr::rename(lucas.ch, replace=c("Soil_ID"="sample_ID", "GPS_.LAT"="GPS_LAT", "pH.in.H2O"="pH_in_H2O", "pH.in.CaCl2"="pH_in_CaCl", "Calcium.carbonate/.g.kg–1"="CaCO3", "Silt/.g.kg–1"="silt", "Sand/.g.kg–1"="sand", "Clay/.g.kg–1"="clay", "Organic.carbon/.g.kg–1"="OC"))
## Double readings?
lucas.t = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([list](https://rdrr.io/r/base/list.html)(lucas.samples, lucas.ro, lucas.bu))
lucas.h.lst <- [c](https://rdrr.io/r/base/c.html)("POINT_ID", "usiteid", "site_obsdate", "GPS_LONG", "GPS_LAT", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay", "silt", "sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_in_H2O", "pH_in_CaCl", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas.h.lst[[which](https://rdrr.io/r/base/which.html)(!lucas.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(lucas.t))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ lucas.t[,i] = NA } }
chemsprops.LUCAS <- lucas.t[,lucas.h.lst]
chemsprops.LUCAS$source_db = "LUCAS_2009"
chemsprops.LUCAS$hzn_top <- 0
chemsprops.LUCAS$hzn_bot <- 20
chemsprops.LUCAS$confidence_degree = 2
chemsprops.LUCAS$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS = complete.vars(chemsprops.LUCAS, sel = [c](https://rdrr.io/r/base/c.html)("OC","clay","pH_in_H2O"), coords = [c](https://rdrr.io/r/base/c.html)("GPS_LONG", "GPS_LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LUCAS)
#> [1] 21272 36
```
```
if({
#lucas2015.samples <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.xlsx", sheet = 1)
lucas2015.xy = readOGR("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.shp")
#head(as.data.frame(lucas2015.xy))
lucas2015.xy = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(lucas2015.xy)
## https://www.aqion.de/site/130
## 1 mS/cm = 1 dS/m = 100 mS/m, so EC reported in mS/m is divided by 100 to get dS/m
lucas2015.xy$ec_satp = lucas2015.xy$EC / 100
lucas2015.h.lst <- [c](https://rdrr.io/r/base/c.html)("Point_ID", "LC0_Desc", "site_obsdate", "coords.x1", "coords.x2", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "Clay", "Silt", "Sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_H20", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas2015.h.lst[[which](https://rdrr.io/r/base/which.html)(!lucas2015.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(lucas2015.xy))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ lucas2015.xy[,i] = NA } }
chemsprops.LUCAS2 <- lucas2015.xy[,lucas2015.h.lst]
chemsprops.LUCAS2$source_db = "LUCAS_2015"
chemsprops.LUCAS2$hzn_top <- 0
chemsprops.LUCAS2$hzn_bot <- 20
chemsprops.LUCAS2$site_obsdate <- "2015"
chemsprops.LUCAS2$confidence_degree = 2
chemsprops.LUCAS2$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS2$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS2 = complete.vars(chemsprops.LUCAS2, sel = [c](https://rdrr.io/r/base/c.html)("OC","Clay","pH_H20"), coords = [c](https://rdrr.io/r/base/c.html)("coords.x1", "coords.x2"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LUCAS2)
#> [1] 21859 36
```
#### 5\.3\.0\.14 Mangrove forest soil DB
* Sanderman, J., Hengl, T., Fiske, G., Solvik, K., Adame, M. F., Benson, L., … \& Duncan, C. (2018\). [A global map of mangrove forest soil carbon at 30 m spatial resolution](https://doi.org/10.1088/1748-9326/aabe1c). Environmental Research Letters, 13(5\), 055002\. Data download URL: [https://dataverse.harvard.edu/dataset.xhtml?persistentId\=doi:10\.7910/DVN/OCYUIT](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OCYUIT)
```
if({
mng.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_sites.csv", skip=1)
mng.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_horizons.csv", skip=1)
mngALL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(mng.hors, mng.profs, by=[c](https://rdrr.io/r/base/c.html)("Site.name"))
mngALL$oc = mngALL$OC_final * 10
mngALL$oc_d = mngALL$CD_calc * 1000
mngALL$hzn_top = mngALL$U_depth * 100
mngALL$hzn_bot = mngALL$L_depth * 100
mngALL$wpg2 = 0
#summary(mngALL$BD_reported) ## some very high values 3.26 t/m3
mngALL$Year = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(mngALL$Year_sampled), mngALL$Years_collected, mngALL$Year_sampled)
mng.col = [c](https://rdrr.io/r/base/c.html)("Site.name", "Site..", "Year", "Longitude_Adjusted", "Latitude_Adjusted", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD_reported", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = mng.col[[which](https://rdrr.io/r/base/which.html)(!mng.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(mngALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ mngALL[,i] = NA } }
chemsprops.Mangroves = mngALL[,mng.col]
chemsprops.Mangroves$source_db = "MangrovesDB"
chemsprops.Mangroves$confidence_degree = 4
chemsprops.Mangroves$project_url = "http://maps.oceanwealth.org/mangrove-restoration/"
chemsprops.Mangroves$citation_url = "https://doi.org/10.1088/1748-9326/aabe1c"
chemsprops.Mangroves = complete.vars(chemsprops.Mangroves, sel = [c](https://rdrr.io/r/base/c.html)("oc","BD_reported"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude_Adjusted", "Latitude_Adjusted"))
#head(chemsprops.Mangroves)
#levels(as.factor(mngALL$OK.to.release.))
mng.rm = chemsprops.Mangroves$Site.name[chemsprops.Mangroves$Site.name [%in%](https://rdrr.io/r/base/match.html) mngALL$Site.name[[grep](https://rdrr.io/r/base/grep.html)("N", mngALL$OK.to.release., ignore.case = FALSE)]]
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Mangroves)
#> [1] 7734 36
```
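The mangrove import above is mostly unit rescaling: organic carbon from % to g/kg (×10), carbon density from g/cm³ to kg/m³ (×1000) and depths from m to cm (×100). A small sketch with made-up values, assuming those source units (as implied by the conversions above):

```
## Unit rescaling sketch (made-up values):
OC_final <- 3.2    # %
CD_calc  <- 0.035  # g/cm3
U_depth  <- 0.15   # m
c(oc      = OC_final * 10,    # g/kg
  oc_d    = CD_calc  * 1000,  # kg/m3
  hzn_top = U_depth  * 100)   # cm
```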
#### 5\.3\.0\.15 CIFOR peatland points
Peatland soil measurements (points) from the literature described in:
* Murdiyarso, D., Roman\-Cuesta, R. M., Verchot, L. V., Herold, M., Gumbricht, T., Herold, N., \& Martius, C. (2017\). New map reveals more peat in the tropics (Vol. 189\). CIFOR. [https://doi.org/10\.17528/cifor/006452](https://doi.org/10.17528/cifor/006452)
```
if({
cif.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/CIFOR_peatlands/SOC_literature_CIFOR.csv")
#summary(cif.hors$BD..g.cm..)
#summary(cif.hors$SOC)
cif.hors$oc = cif.hors$SOC * 10
cif.hors$wpg2 = 0
cif.hors$c_tot = cif.hors$TOC.content.... * 10
cif.hors$oc_d = cif.hors$C.density..kg.C.m..
cif.hors$site_obsdate = [as.integer](https://rdrr.io/r/base/integer.html)([substr](https://rdrr.io/r/base/substr.html)(cif.hors$year, 1, 4))-1
cif.col = [c](https://rdrr.io/r/base/c.html)("SOURCEID", "usiteid", "site_obsdate", "modelling.x", "modelling.y", "labsampnum", "layer_sequence", "Upper", "Lower", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD..g.cm..", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cif.col[[which](https://rdrr.io/r/base/which.html)(!cif.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cif.hors))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cif.hors[,i] = NA } }
chemsprops.Peatlands = cif.hors[,cif.col]
chemsprops.Peatlands$source_db = "CIFOR"
chemsprops.Peatlands$confidence_degree = 4
chemsprops.Peatlands$project_url = "https://www.cifor.org/"
chemsprops.Peatlands$citation_url = "https://doi.org/10.17528/cifor/006452"
chemsprops.Peatlands = complete.vars(chemsprops.Peatlands, sel = [c](https://rdrr.io/r/base/c.html)("oc","BD..g.cm.."), coords = [c](https://rdrr.io/r/base/c.html)("modelling.x", "modelling.y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Peatlands)
#> [1] 756 36
```
#### 5\.3\.0\.16 LandPKS observations
* Herrick, J. E., Urama, K. C., Karl, J. W., Boos, J., Johnson, M. V. V., Shepherd, K. D., … \& Kosnik, C. (2013\). [The Global Land\-Potential Knowledge System (LandPKS): Supporting Evidence\-based, Site\-specific Land Use and Management through Cloud Computing, Mobile Applications, and Crowdsourcing](https://doi.org/10.2489/jswc.68.1.5A). Journal of Soil and Water Conservation, 68(1\), 5A\-12A. Data download URL: [http://portal.landpotential.org/\#/landpksmap](http://portal.landpotential.org/#/landpksmap)
```
if({
pks = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/LandPKS/Export_LandInfo_Data.csv", stringsAsFactors = FALSE)
#str(pks)
pks.hor = [data.frame](https://rdrr.io/r/base/data.frame.html)(rock_fragments = [c](https://rdrr.io/r/base/c.html)(pks$rock_fragments_layer_0_1cm,
pks$rock_fragments_layer_1_10cm,
pks$rock_fragments_layer_10_20cm,
pks$rock_fragments_layer_20_50cm,
pks$rock_fragments_layer_50_70cm,
pks$rock_fragments_layer_70_100cm,
pks$rock_fragments_layer_100_120cm),
tex_field = [c](https://rdrr.io/r/base/c.html)(pks$texture_layer_0_1cm,
pks$texture_layer_1_10cm,
pks$texture_layer_10_20cm,
pks$texture_layer_20_50cm,
pks$texture_layer_50_70cm,
pks$texture_layer_70_100cm,
pks$texture_layer_100_120cm))
pks.hor$hzn_top = [c](https://rdrr.io/r/base/c.html)([rep](https://rdrr.io/r/base/rep.html)(0, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(1, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(10, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(20, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(50, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(70, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(100, [nrow](https://rdrr.io/r/base/nrow.html)(pks)))
pks.hor$hzn_bot = [c](https://rdrr.io/r/base/c.html)([rep](https://rdrr.io/r/base/rep.html)(1, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(10, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(20, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(50, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(70, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(100, [nrow](https://rdrr.io/r/base/nrow.html)(pks)),
[rep](https://rdrr.io/r/base/rep.html)(120, [nrow](https://rdrr.io/r/base/nrow.html)(pks)))
pks.hor$longitude_decimal_degrees = [rep](https://rdrr.io/r/base/rep.html)(pks$longitude, 7)
pks.hor$latitude_decimal_degrees = [rep](https://rdrr.io/r/base/rep.html)(pks$latitude, 7)
pks.hor$site_obsdate = [rep](https://rdrr.io/r/base/rep.html)(pks$modified_date, 7)
pks.hor$site_key = [rep](https://rdrr.io/r/base/rep.html)(pks$id, 7)
#summary(as.factor(pks.hor$tex_field))
tex.tr = [data.frame](https://rdrr.io/r/base/data.frame.html)(tex_field=[c](https://rdrr.io/r/base/c.html)("CLAY", "CLAY LOAM", "LOAM", "LOAMY SAND", "SAND", "SANDY CLAY", "SANDY CLAY LOAM", "SANDY LOAM", "SILT LOAM", "SILTY CLAY", "SILTY CLAY LOAM"),
clay_tot_psa=[c](https://rdrr.io/r/base/c.html)(62.4, 34.0, 19.0, 5.8, 3.3, 41.7, 27.0, 10.0, 13.1, 46.7, 34.0),
silt_tot_psa=[c](https://rdrr.io/r/base/c.html)(17.8, 34.0, 40.0, 12.0, 5.0, 6.7, 13.0, 25.0, 65.7, 46.7, 56.0),
sand_tot_psa=[c](https://rdrr.io/r/base/c.html)(19.8, 32.0, 41.0, 82.2, 91.7, 51.6, 60.0, 65.0, 21.2, 6.7, 10.0))
pks.hor$clay_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$clay_tot_psa
pks.hor$silt_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$silt_tot_psa
pks.hor$sand_tot_psa = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(pks.hor["tex_field"], tex.tr)$sand_tot_psa
#summary(as.factor(pks.hor$rock_fragments))
pks.hor$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments==">60%", 65, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="35-60%", 47.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="15-35%", 25, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="1-15%" | pks.hor$rock_fragments=="0-15%", 7.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(pks.hor$rock_fragments=="0-1%", 0.5, NA)))))
#head(pks.hor)
#plot(pks.hor[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
pks.col = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_field", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = pks.col[[which](https://rdrr.io/r/base/which.html)(!pks.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(pks.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ pks.hor[,i] = NA } }
chemsprops.LandPKS = pks.hor[,pks.col]
chemsprops.LandPKS$source_db = "LandPKS"
chemsprops.LandPKS$confidence_degree = 8
chemsprops.LandPKS$project_url = "http://portal.landpotential.org"
chemsprops.LandPKS$citation_url = "https://doi.org/10.2489/jswc.68.1.5A"
chemsprops.LandPKS = complete.vars(chemsprops.LandPKS, sel = [c](https://rdrr.io/r/base/c.html)("clay_tot_psa","wpg2"), coords = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.LandPKS)
#> [1] 41644 36
```
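LandPKS only records field classes, so the import above maps each texture class to representative clay/silt/sand percentages and each rock-fragment class to its range midpoint. The rock-fragment mapping can also be written as a named-vector lookup; a small sketch with the same values as the nested `ifelse()` above (an illustration, not a replacement for the code used):

```
## Range-midpoint lookup for the LandPKS rock-fragment classes:
wpg2.map <- c(">60%" = 65, "35-60%" = 47.5, "15-35%" = 25,
              "1-15%" = 7.5, "0-15%" = 7.5, "0-1%" = 0.5)
rock_fragments <- c("0-1%", "15-35%", ">60%", NA)
unname(wpg2.map[rock_fragments])   # -> 0.5 25.0 65.0 NA
```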
#### 5\.3\.0\.17 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/). Data download URL: <http://egrpr.esoil.ru/content/1DB.html>
```
if({
russ.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = [paste](https://rdrr.io/r/base/paste.html)(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$wpg2 = russ.HOR$TEXTSTNS
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
## Correct texture fractions:
sumTex <- [rowSums](https://rdrr.io/r/base/colSums.html)(russ.HOR[,[c](https://rdrr.io/r/base/c.html)("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- [rowMeans](https://rdrr.io/r/base/colSums.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(x1=russ.HOR$CORG * 10, x2=russ.HOR$ORGMAT/1.724 * 10), na.rm=TRUE)
russ.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(russ.HOR$oc / 1000 * russ.HOR$DVOL * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$wpg2), 0, russ.HOR$wpg2))/100, 3)
russ.HOR$n_tot <- russ.HOR$NTOT * 10
russ.HOR$ca_ext = russ.HOR$EXCA * 200
russ.HOR$mg_ext = russ.HOR$EXMG * 121
russ.HOR$na_ext = russ.HOR$EXNA * 230
russ.HOR$k_ext = russ.HOR$EXK * 391
## Sampling year not available but with high confidence <2000
russ.HOR$site_obsdate = "1982"
russ.sel.h <- [c](https://rdrr.io/r/base/c.html)("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "oc_d", "c_tot", "NTOT", "PHSLT", "PHH2O", "ph_cacl2", "CECST", "cec_nh4", "ecec", "wpg2", "DVOL", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = russ.sel.h[[which](https://rdrr.io/r/base/which.html)(!russ.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(russ.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ russ.HOR[,i] = NA } }
chemsprops.EGRPR = russ.HOR[,russ.sel.h]
chemsprops.EGRPR$source_db = "Russia_EGRPR"
chemsprops.EGRPR$confidence_degree = 2
chemsprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
chemsprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
chemsprops.EGRPR <- complete.vars(chemsprops.EGRPR, sel=[c](https://rdrr.io/r/base/c.html)("oc", "CLYPPT"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.EGRPR)
#> [1] 4437 36
```
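The texture correction above rescales sand and silt so that the three fractions again sum to 100% while the (already adjusted) clay fraction is kept fixed. A worked example on one row of made-up values:

```
## Worked example of the sand/silt rescaling used above (made-up values):
clay <- 22; silt <- 48; sand <- 38           # sums to 108; clay is kept as-is
sumTex <- clay + silt + sand
silt.c <- silt / ((sumTex - clay) / (100 - clay))
sand.c <- sand / ((sumTex - clay) / (100 - clay))
c(clay, silt.c, sand.c)                      # -> 22, ~43.5, ~34.5 (sums to 100)
```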
#### 5\.3\.0\.18 Canada National Pedon DB
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0). Data download URL: <https://open.canada.ca/data/en/>
```
if({
NPDB.nm = [c](https://rdrr.io/r/base/c.html)("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
NPDB.HOR$HISMMN = [paste0](https://rdrr.io/r/base/paste.html)(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
NPDB.HOR$CARB_ORG[NPDB.HOR$CARB_ORG==9] <- NA
NPDB.HOR$N_TOTAL[NPDB.HOR$N_TOTAL==9] <- NA
NPDB.HOR$oc = NPDB.HOR$CARB_ORG * 10
NPDB.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(NPDB.HOR$oc / 1000 * NPDB.HOR$BULK_DEN * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(NPDB.HOR$VC_SAND), 0, NPDB.HOR$VC_SAND))/100, 3)
NPDB.HOR$ca_ext = NPDB.HOR$EXCH_CA * 200
NPDB.HOR$mg_ext = NPDB.HOR$EXCH_MG * 121
NPDB.HOR$na_ext = NPDB.HOR$EXCH_NA * 230
NPDB.HOR$k_ext = NPDB.HOR$EXCH_K * 391
npdb.sel.h = [c](https://rdrr.io/r/base/c.html)("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "oc", "oc_d", "c_tot", "N_TOTAL", "ph_kcl", "PH_H2O", "PH_CACL2", "CEC", "cec_nh4", "ecec", "VC_SAND", "BULK_DEN", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = npdb.sel.h[[which](https://rdrr.io/r/base/which.html)(!npdb.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(NPDB.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ NPDB.HOR[,i] = NA } }
chemsprops.NPDB = NPDB.HOR[,npdb.sel.h]
chemsprops.NPDB$source_db = "Canada_NPDB"
chemsprops.NPDB$confidence_degree = 2
chemsprops.NPDB$project_url = "https://open.canada.ca/data/en/"
chemsprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
chemsprops.NPDB <- complete.vars(chemsprops.NPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc", "PH_H2O", "T_CLAY"), coords = [c](https://rdrr.io/r/base/c.html)("DD_LONG", "DD_LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NPDB)
#> [1] 15946 36
```
#### 5\.3\.0\.19 Canadian upland forest soil profile and carbon stocks database
* Shaw, C., Hilger, A., Filiatrault, M., \& Kurz, W. (2018\). [A Canadian upland forest soil profile and carbon stocks database](https://doi.org/10.1002/ecy.2159). Ecology, 99(4\), 989\-989\. Data download URL: [https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi\=10\.1002%2Fecy.2159\&file\=ecy2159\-sup\-0001\-DataS1\.zip](https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fecy.2159&file=ecy2159-sup-0001-DataS1.zip)
\*Organic horizons have negative upper depths, the first mineral soil horizon starts at 0 cm, and deeper mineral horizons have positive values. These depths need to be corrected before the records can be combined with the other international data sets.
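The correction applied in the import below subtracts, per profile, the most negative upper-depth value, so that organic layers start at 0 cm and all other depths shift down accordingly. A minimal sketch on a toy profile (made-up values):

```
## Toy illustration of the per-profile depth shift applied below:
upper <- c(-8, -3, 0, 10)            # organic horizons have negative upper depths
shift <- min(upper)
shift <- ifelse(shift > 0, 0, shift) # only shift profiles whose uppermost depth is negative
upper - shift                        # -> 0 5 8 18, i.e. all depths >= 0 cm
```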
```
if({
## Reading of the .dat file was tricky
cufs.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/PROFILES.csv", stringsAsFactors = FALSE)
cufs.HOR$LOWER_HZN_LIMIT =cufs.HOR$UPPER_HZN_LIMIT + cufs.HOR$HZN_THICKNESS
## Correct depth (Canadian data can have negative depths for soil horizons):
z.min.cufs <- ddply(cufs.HOR, .(LOCATION_ID), summarize, aggregated = [min](https://rdrr.io/r/base/Extremes.html)(UPPER_HZN_LIMIT, na.rm=TRUE))
z.shift.cufs <- join(cufs.HOR["LOCATION_ID"], z.min.cufs, type="left")$aggregated
## only apply a shift where the minimum depth is negative
z.shift.cufs <- [ifelse](https://rdrr.io/r/base/ifelse.html)(z.shift.cufs>0, 0, z.shift.cufs)
cufs.HOR$hzn_top <- cufs.HOR$UPPER_HZN_LIMIT - z.shift.cufs
cufs.HOR$hzn_bot <- cufs.HOR$LOWER_HZN_LIMIT - z.shift.cufs
cufs.SITE = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/SITES.csv", stringsAsFactors = FALSE)
cufs.HOR$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LONGITUDE
cufs.HOR$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LATITUDE
cufs.HOR$site_obsdate = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$YEAR_SAMPLED
cufs.HOR$usiteid = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$RELEASE_SOURCE_SITEID
#summary(cufs.HOR$ORG_CARB_PCT)
#hist(cufs.HOR$ORG_CARB_PCT, breaks=45)
cufs.HOR$oc = cufs.HOR$ORG_CARB_PCT*10
#cufs.HOR$c_tot = cufs.HOR$oc + ifelse(is.na(cufs.HOR$CARBONATE_CARB_PCT), 0, cufs.HOR$CARBONATE_CARB_PCT*10)
cufs.HOR$n_tot = cufs.HOR$TOT_NITRO_PCT*10
cufs.HOR$ca_ext = cufs.HOR$EXCH_Ca * 200
cufs.HOR$mg_ext = cufs.HOR$EXCH_Mg * 121
cufs.HOR$na_ext = cufs.HOR$EXCH_Na * 230
cufs.HOR$k_ext = cufs.HOR$EXCH_K * 391
cufs.HOR$ph_cacl2 = cufs.HOR$pH
cufs.HOR$ph_cacl2[!cufs.HOR$pH_H2O_CACL2=="CACL2"] = NA
cufs.HOR$ph_h2o = cufs.HOR$pH
cufs.HOR$ph_h2o[!cufs.HOR$pH_H2O_CACL2=="H2O"] = NA
#summary(cufs.HOR$CF_VOL_PCT) ## is NA == 0??
cufs.HOR$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(cufs.HOR$CF_CORR_FACTOR==1, 0, cufs.HOR$CF_VOL_PCT)
cufs.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cufs.HOR$oc / 1000 * cufs.HOR$BULK_DENSITY * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cufs.HOR$wpg2), 0, cufs.HOR$wpg2))/100, 3)
cufs.sel.h = [c](https://rdrr.io/r/base/c.html)("LOCATION_ID", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "HZN_SEQ_NO", "hzn_top", "hzn_bot", "HORIZON", "TEXT_CLASS", "CLAY_PCT", "SILT_PCT", "SAND_PCT", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "CEC_CALCULATED", "cec_nh4", "ecec", "wpg2", "BULK_DENSITY", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELEC_COND", "ec_12pre")
x.na = cufs.sel.h[[which](https://rdrr.io/r/base/which.html)(!cufs.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cufs.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cufs.HOR[,i] = NA } }
chemsprops.CUFS = cufs.HOR[,cufs.sel.h]
chemsprops.CUFS$source_db = "Canada_CUFS"
chemsprops.CUFS$confidence_degree = 1
chemsprops.CUFS$project_url = "https://cfs.nrcan.gc.ca/publications/centre/nofc"
chemsprops.CUFS$citation_url = "https://doi.org/10.1002/ecy.2159"
chemsprops.CUFS <- complete.vars(chemsprops.CUFS, sel=[c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "CLAY_PCT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CUFS)
#> [1] 15162 36
```
#### 5\.3\.0\.20 Permafrost in subarctic Canada
* Estop\-Aragones, C.; Fisher, J.P.; Cooper, M.A.; Thierry, A.; Treharne, R.; Murton, J.B.; Phoenix, G.K.; Charman, D.J.; Williams, M.; Hartley, I.P. (2016\). Bulk density, carbon and nitrogen content in soil profiles from permafrost in subarctic Canada. NERC Environmental Information Data Centre. [https://doi.org/10\.5285/efa2a84b\-3505\-4221\-a7da\-12af3cdc1952](https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952). Data download URL:
```
if({
caperm.HOR = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/Canada/NorthCanada/Bulk_density_CandNcontent_profiles_all_sites.csv")
#measurements::conv_unit("-99 36 15.7", from = "deg_min_sec", to = "dec_deg")
#caperm.HOR$longitude_decimal_degrees = as.numeric(measurements::conv_unit(paste0("-", gsub('\"W', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_West, "UTF-8", "UTF-8", sub=' ')), fixed = TRUE)), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)([paste0](https://rdrr.io/r/base/paste.html)("-", caperm.HOR$Cordinates_West), from = "deg_min_sec", to = "dec_deg"))
#caperm.HOR$latitude_decimal_degrees = as.numeric(measurements::conv_unit(gsub('\"N', '', gsub('o', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_North, "UTF-8", "UTF-8", sub=' '))), fixed = TRUE), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)(caperm.HOR$Cordinates_North, from = "deg_min_sec", to = "dec_deg"))
#plot(caperm.HOR[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
caperm.HOR$site_obsdate = "2013"
caperm.HOR$site_key = [make.unique](https://rdrr.io/r/base/make.unique.html)(caperm.HOR$Soil.core)
#summary(as.factor(caperm.HOR$Soil_depth_cm))
caperm.HOR$hzn_top = caperm.HOR$Soil_depth_cm-1
caperm.HOR$hzn_bot = caperm.HOR$Soil_depth_cm+1
caperm.HOR$db_od = caperm.HOR$Bulk_density_gdrysoil_cm3wetsoil
caperm.HOR$oc = caperm.HOR$Ccontent_percentage_on_drymass * 10
caperm.HOR$n_tot = caperm.HOR$Ncontent_percentage_on_drymass * 10
caperm.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(caperm.HOR$oc / 1000 * caperm.HOR$db_od * 1000, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(caperm.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ caperm.HOR[,i] = NA } }
chemsprops.CAPERM = caperm.HOR[,col.names]
chemsprops.CAPERM$source_db = "Canada_subarctic"
chemsprops.CAPERM$confidence_degree = 2
chemsprops.CAPERM$project_url = "http://arp.arctic.ac.uk/projects/carbon-cycling-linkages-permafrost-systems-cyclops/"
chemsprops.CAPERM$citation_url = "https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952"
chemsprops.CAPERM <- complete.vars(chemsprops.CAPERM, sel=[c](https://rdrr.io/r/base/c.html)("oc", "n_tot"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CAPERM)
#> [1] 1180 36
```
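The coordinate handling above relies on `measurements::conv_unit()` to turn degree–minute–second strings into decimal degrees; the commented-out lines show the extra string cleaning that some of the source files needed. A minimal usage example, reusing the string from the commented line in the chunk above:

```
## Degrees-minutes-seconds to decimal degrees, as used above:
# install.packages("measurements")
x <- measurements::conv_unit("-99 36 15.7", from = "deg_min_sec", to = "dec_deg")
as.numeric(x)   # wrap in as.numeric() as in the import above
```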
#### 5\.3\.0\.21 SOTER China soil profiles
* Dijkshoorn, K., van Engelen, V., \& Huting, J. (2008\). [Soil and landform properties for LADA partner countries](https://isric.org/sites/default/files/isric_report_2008_06.pdf). ISRIC report 2008/06 and GLADA report 2008/03, ISRIC – World Soil Information and FAO, Wageningen. Data download URL: [https://files.isric.org/public/soter/CN\-SOTER.zip](https://files.isric.org/public/soter/CN-SOTER.zip)
```
if({
sot.sites = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Profile.csv")
sot.horizons = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Horizon.csv")
sot.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(sot.sites, sot.horizons), type = "full")
sot.HOR$oc = sot.HOR$SOCA * 10
sot.HOR$ca_ext = sot.HOR$EXCA * 200
sot.HOR$mg_ext = sot.HOR$EXMG * 121
sot.HOR$na_ext = sot.HOR$EXNA * 230
sot.HOR$k_ext = sot.HOR$EXCK * 391
## upper depth is missing and needs to be derived manually
sot.HOR$hzn_top = NA
sot.HOR$hzn_top[2:[nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)] <- sot.HOR$HBDE[1:([nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)-1)]
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$hzn_top > sot.HOR$HBDE, 0, sot.HOR$hzn_top)
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$HONU==1 & [is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$hzn_top), 0, sot.HOR$hzn_top)
sot.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sot.HOR$oc / 1000 * sot.HOR$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$SDVC), 0, sot.HOR$SDVC))/100, 3)
sot.sel.h = [c](https://rdrr.io/r/base/c.html)("PRID", "PDID", "SAYR", "LNGI", "LATI", "labsampnum", "HONU", "hzn_top","HBDE","HODE", "PSCL", "CLPC", "STPC", "SDTO", "oc", "oc_d", "TOTC", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "SDVC", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = sot.sel.h[[which](https://rdrr.io/r/base/which.html)(!sot.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sot.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sot.HOR[,i] = NA } }
chemsprops.CNSOT = sot.HOR[,sot.sel.h]
chemsprops.CNSOT$source_db = "China_SOTER"
chemsprops.CNSOT$confidence_degree = 8
chemsprops.CNSOT$project_url = "https://www.isric.org/explore/soter"
chemsprops.CNSOT$citation_url = "https://isric.org/sites/default/files/isric_report_2008_06.pdf"
chemsprops.CNSOT <- complete.vars(chemsprops.CNSOT, sel=[c](https://rdrr.io/r/base/c.html)("TOTC", "PHAQ", "CLPC"), coords = [c](https://rdrr.io/r/base/c.html)("LNGI", "LATI"))
}
#> Joining by: PRID, INFR
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CNSOT)
#> [1] 5105 36
```
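For the Chinese SOTER horizons only the lower depth (`HBDE`) is available, so the upper depth is reconstructed by lagging the lower depths one position and setting it to 0 at the first horizon of each profile; the three `hzn_top` lines above do this by shifting the whole column and then repairing the profile boundaries. A per-profile reformulation of the same idea on toy data (a sketch, not the code used above):

```
## Toy reconstruction of upper depths from lower depths (two profiles):
d <- data.frame(PRID = c("P1", "P1", "P1", "P2", "P2"),
                HONU = c(1, 2, 3, 1, 2),       # horizon number
                HBDE = c(20, 45, 90, 30, 60))  # lower depth, cm
d$hzn_top <- ave(d$HBDE, d$PRID, FUN = function(x) c(0, x[-length(x)]))
d   # hzn_top -> 0 20 45 for P1 and 0 30 for P2
```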
#### 5\.3\.0\.22 SISLAC
* Sistema de Información de Suelos de Latinoamérica (SISLAC), Data download URL: [http://54\.229\.242\.119/sislac/es](http://54.229.242.119/sislac/es)
```
if({
sis.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/SA/SISLAC/sislac_profiles_es.csv", stringsAsFactors = FALSE)
#str(sis.hor)
## SOC values for Uruguay do not match the original soil profile data (see e.g. http://www.mgap.gub.uy/sites/default/files/multimedia/skmbt_c45111090914030.pdf)
## compare with:
#sis.hor[sis.hor$perfil_id=="23861",]
## Subset to SISINTA/WOSIS points:
cor.sel = [c](https://rdrr.io/r/base/c.html)([grep](https://rdrr.io/r/base/grep.html)("WoSIS", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)), [grep](https://rdrr.io/r/base/grep.html)("SISINTA", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)))
#length(cor.sel)
sis.hor = sis.hor[cor.sel,]
#summary(sis.hor$analitico_carbono_organico_c)
sis.hor$oc = sis.hor$analitico_carbono_organico_c * 10
sis.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sis.hor$oc / 1000 * sis.hor$analitico_densidad_aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sis.hor$analitico_gravas), 0, sis.hor$analitico_gravas))/100, 3)
#summary(sis.hor$analitico_base_k)
#summary(as.factor(sis.hor$perfil_fecha))
sis.sel.h = [c](https://rdrr.io/r/base/c.html)("perfil_id", "perfil_numero", "perfil_fecha", "perfil_ubicacion_longitud", "perfil_ubicacion_latitud", "id", "layer_sequence", "profundidad_superior", "profundidad_inferior", "hzn_desgn", "tex_psda", "analitico_arcilla", "analitico_limo_2_50", "analitico_arena_total", "oc", "oc_d", "c_tot", "n_tot", "analitico_ph_kcl", "analitico_ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "analitico_gravas", "analitico_densidad_aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "analitico_conductividad", "ec_12pre")
x.na = sis.sel.h[[which](https://rdrr.io/r/base/which.html)(!sis.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sis.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sis.hor[,i] = NA } }
chemsprops.SISLAC = sis.hor[,sis.sel.h]
chemsprops.SISLAC$source_db = "SISLAC"
chemsprops.SISLAC$confidence_degree = 4
chemsprops.SISLAC$project_url = "http://54.229.242.119/sislac/es"
chemsprops.SISLAC$citation_url = "https://hdl.handle.net/10568/49611"
chemsprops.SISLAC <- complete.vars(chemsprops.SISLAC, sel=[c](https://rdrr.io/r/base/c.html)("oc","analitico_ph_kcl","analitico_arcilla"), coords = [c](https://rdrr.io/r/base/c.html)("perfil_ubicacion_longitud", "perfil_ubicacion_latitud"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SISLAC)
#> [1] 49994 36
```
#### 5\.3\.0\.23 FEBR
* Samuel\-Rosa, A., Dalmolin, R. S. D., Moura\-Bueno, J. M., Teixeira, W. G., \& Alba, J. M. F. (2020\). Open legacy soil survey data in Brazil: geospatial data quality and how to improve it. Scientia Agricola, 77(1\). [https://doi.org/10\.1590/1678\-992x\-2017\-0430](https://doi.org/10.1590/1678-992x-2017-0430)
* Free Brazilian Repository for Open Soil Data – febr. Data download URL: <http://www.ufsm.br/febr/>
```
if({
#library(febr)
## download up-to-date copy of data
#febr.lab = febr::layer(dataset = "all", variable="all")
#febr.lab = febr::observation(dataset = "all")
febr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Brasil/FEBR/febr-superconjunto.csv", stringsAsFactors = FALSE, dec = ",", sep = ";")
#head(febr.hor)
#summary(febr.hor$carbono)
#summary(febr.hor$ph)
#summary(febr.hor$dsi) ## bulk density of total soil
febr.hor$clay_tot_psa = febr.hor$argila /10
febr.hor$sand_tot_psa = febr.hor$areia /10
febr.hor$silt_tot_psa = febr.hor$silte /10
febr.hor$wpg2 = (1000-febr.hor$terrafina)/10
febr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(febr.hor$carbono / 1000 * febr.hor$dsi * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(febr.hor$wpg2), 0, febr.hor$wpg2))/100, 3)
febr.sel.h <- [c](https://rdrr.io/r/base/c.html)("observacao_id", "usiteid", "observacao_data", "coord_x", "coord_y", "sisb_id", "camada_id", "profund_sup", "profund_inf", "camada_nome", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "carbono", "oc_d", "c_tot", "nitrogenio", "ph_kcl", "ph", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "dsi", "ca_ext", "mg_ext", "na_ext", "k_ext", "ce", "ec_12pre")
x.na = febr.sel.h[[which](https://rdrr.io/r/base/which.html)(!febr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(febr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ febr.hor[,i] = NA } }
chemsprops.FEBR = febr.hor[,febr.sel.h]
chemsprops.FEBR$source_db = "FEBR"
chemsprops.FEBR$confidence_degree = 4
chemsprops.FEBR$project_url = "http://www.ufsm.br/febr/"
chemsprops.FEBR$citation_url = "https://doi.org/10.1590/1678-992x-2017-0430"
chemsprops.FEBR <- complete.vars(chemsprops.FEBR, sel=[c](https://rdrr.io/r/base/c.html)("carbono","ph","clay_tot_psa","dsi"), coords = [c](https://rdrr.io/r/base/c.html)("coord_x", "coord_y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FEBR)
#> [1] 7842 36
```
#### 5\.3\.0\.24 PRONASOLOS
* POLIDORO, J., COELHO, M., CARVALHO FILHO, A. D., LUMBRERAS, J., de OLIVEIRA, A. P., VASQUES, G. D. M., … \& BREFIN, M. (2021\). [Programa Nacional de Levantamento e Interpretação de Solos do Brasil (PronaSolos): diretrizes para implementação](https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056). Embrapa Solos\-Documentos (INFOTECA\-E).
* Download URL: <http://geoinfo.cnps.embrapa.br/documents/3013/download>
```
if({
pronas.hor = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)("/mnt/diskstation/data/Soil_points/Brasil/Pronasolos/Perfis_PronaSolos_20201202v2.shp"))
## 34,464 rows
#head(pronas.hor)
#summary(as.numeric(pronas.hor$carbono_or))
#summary(as.numeric(pronas.hor$densidade_))
#summary(as.numeric(pronas.hor$argila))
#summary(as.numeric(pronas.hor$cascalho))
#summary(as.numeric(pronas.hor$ph_h2o))
#summary(as.numeric(pronas.hor$complexo_2))
## Many errors / typos, e.g. unrealistically high values and zero values
#pronas.hor$data_colet[1:50]
pronas.in.name = [c](https://rdrr.io/r/base/c.html)("sigla", "codigo_pon", "data_colet", "gcs_latitu", "gcs_longit", "simbolo_ho", "profundida",
"profundi_1", "cascalho", "areia_tota", "silte", "argila", "densidade_", "ph_h2o", "ph_kcl",
"complexo_s", "complexo_1", "complexo_2", "complexo_3", "valor_s", "carbono_or", "nitrogenio",
"condutivid", "classe_tex")
#pronas.in.name[which(!pronas.in.name %in% names(pronas.hor))]
pronas.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(pronas.hor[,pronas.in.name])
pronas.out.name = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "latitude_decimal_degrees", "longitude_decimal_degrees",
"hzn_desgn", "hzn_bot", "hzn_top", "wpg2", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "db_od", "ph_h2o", "ph_kcl", "ca_ext",
"mg_ext", "k_ext", "na_ext", "cec_sum", "oc", "n_tot", "ec_satp", "tex_psda")
## translate values
pronas.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(pronas.in.name)))
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="sigla")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="codigo_pon")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="data_colet")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="simbolo_ho")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="classe_tex")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_s")]] = "as.numeric(x)*200"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_1")]] = "as.numeric(x)*121"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_2")]] = "as.numeric(x)*391"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_3")]] = "as.numeric(x)*230"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="areia_tota")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="silte")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="argila")]] = "round(as.numeric(x)/10, 1)"
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(pronas.in.name, pronas.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(pronas.fun.lst)), "pronas_soilab_transvalues.csv")
pronas.soil = transvalues(pronas.x, pronas.out.name, pronas.in.name, pronas.fun.lst)
pronas.soil$oc_d = [signif](https://rdrr.io/r/base/Round.html)(pronas.soil$oc / 1000 * pronas.soil$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(pronas.soil$wpg2), 0, pronas.soil$wpg2))/100, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(pronas.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ pronas.soil[,i] = NA } }
chemsprops.PRONASOLOS = pronas.soil[,col.names]
chemsprops.PRONASOLOS$source_db = "PRONASOLOS"
chemsprops.PRONASOLOS$confidence_degree = 2
chemsprops.PRONASOLOS$project_url = "https://geoportal.cprm.gov.br/pronasolos/"
chemsprops.PRONASOLOS$citation_url = "https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056"
chemsprops.PRONASOLOS <- complete.vars(chemsprops.PRONASOLOS, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.PRONASOLOS)
#> [1] 31747 36
```
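The PRONASOLOS columns are renamed and rescaled through a list of translation rules: for every input column there is an output name and a conversion expression in `x` (e.g. `"as.numeric(x)*200"` for exchangeable Ca). `transvalues()` itself is assumed to be a helper sourced earlier in this compilation; a minimal sketch of what such a function might do, under that assumption (hypothetical name `transvalues_sketch()`):

```
## Hypothetical sketch of a transvalues()-style helper: apply one conversion
## expression (written in terms of x) per input column and rename the result.
transvalues_sketch <- function(df, out.name, in.name, fun.lst){
  out <- lapply(seq_along(in.name), function(i){
    x <- df[[in.name[i]]]
    eval(parse(text = fun.lst[[i]]))
  })
  out <- as.data.frame(out, stringsAsFactors = FALSE)
  names(out) <- out.name
  out
}
## toy usage:
toy <- data.frame(complexo_s = c("2.1", "3.4"), argila = c("350", "220"))
transvalues_sketch(toy,
                   out.name = c("ca_ext", "clay_tot_psa"),
                   in.name  = c("complexo_s", "argila"),
                   fun.lst  = list("as.numeric(x)*200", "round(as.numeric(x)/10, 1)"))
## -> ca_ext = 420, 680; clay_tot_psa = 35, 22
```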
#### 5\.3\.0\.25 Soil Profile DB for Costa Rica
* Mata, R., Vázquez, A., Rosales, A., \& Salazar, D. (2012\). [Mapa digital de suelos de Costa Rica](http://www.cia.ucr.ac.cr/?page_id=139). Asociación Costarricense de la Ciencia del Suelo, San José, CRC. Escala 1:200000\. Data download URL: [http://www.cia.ucr.ac.cr/wp\-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1\.1\.rar](http://www.cia.ucr.ac.cr/wp-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1.1.rar)
* Mata\-Chinchilla, R., \& Castro\-Chinchilla, J. (2019\). Geoportal de suelos de Costa Rica como Bien Público al servicio del país. Revista Tecnología En Marcha, 32(7\), Pág. 51\-56\. [https://doi.org/10\.18845/tm.v32i7\.4259](https://doi.org/10.18845/tm.v32i7.4259)
```
if({
cr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Costa_Rica/Base_de_datos_version_1.2.3.csv", stringsAsFactors = FALSE)
#plot(cr.hor[,c("X","Y")], pch="+", asp=1)
cr.hor$usiteid = [paste](https://rdrr.io/r/base/paste.html)(cr.hor$Provincia, cr.hor$Cantón, cr.hor$Id, sep="_")
#summary(cr.hor$Corg.)
cr.hor$oc = cr.hor$Corg. * 10
cr.hor$Densidad.Aparente = [as.numeric](https://rdrr.io/r/base/numeric.html)([paste0](https://rdrr.io/r/base/paste.html)(cr.hor$Densidad.Aparente))
#summary(cr.hor$K)
cr.hor$ca_ext = cr.hor$Ca * 200
cr.hor$mg_ext = cr.hor$Mg * 121
#cr.hor$na_ext = cr.hor$Na * 230
cr.hor$k_ext = cr.hor$K * 391
cr.hor$wpg2 = NA
cr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cr.hor$oc / 1000 * cr.hor$Densidad.Aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cr.hor$wpg2), 0, cr.hor$wpg2))/100, 3)
cr.sel.h = [c](https://rdrr.io/r/base/c.html)("Id", "usiteid", "Fecha", "X", "Y", "labsampnum", "horizonte", "prof_inicio", "prof_final", "id_hz", "Clase.Textural", "ARCILLA", "LIMO", "ARENA", "oc", "oc_d", "c_tot", "n_tot", "pHKCl", "pH_H2O", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Densidad.Aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cr.sel.h[[which](https://rdrr.io/r/base/which.html)(!cr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cr.hor[,i] = NA } }
chemsprops.CostaRica = cr.hor[,cr.sel.h]
chemsprops.CostaRica$source_db = "CostaRica"
chemsprops.CostaRica$confidence_degree = 4
chemsprops.CostaRica$project_url = "http://www.cia.ucr.ac.cr"
chemsprops.CostaRica$citation_url = "https://doi.org/10.18845/tm.v32i7.4259"
chemsprops.CostaRica <- complete.vars(chemsprops.CostaRica, sel=[c](https://rdrr.io/r/base/c.html)("oc","pH_H2O","ARCILLA","Densidad.Aparente"), coords = [c](https://rdrr.io/r/base/c.html)("X", "Y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CostaRica)
#> [1] 2042 36
```
#### 5\.3\.0\.26 Iran soil profile DB
* Dewan, M. L., \& Famouri, J. (1964\). The soils of Iran. Food and Agriculture Organization of the United Nations.
* Hengl, T., Toomanian, N., Reuter, H. I., \& Malakouti, M. J. (2007\). [Methods to interpolate soil categorical variables from profile observations: Lessons from Iran](https://doi.org/10.1016/j.geoderma.2007.04.022). Geoderma, 140(4\), 417\-427\.
* Mohammad, H. B. (2000\). Soil resources and use potentiality map of Iran. Soil and Water Research Institute, Teheran, Iran.
```
if({
na.s = [c](https://rdrr.io/r/base/c.html)("?","","?.","??", -2147483647, -1.00e+308, "<NA>")
iran.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbana.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:12]
[names](https://rdrr.io/r/base/names.html)(iran.hor) = [c](https://rdrr.io/r/base/c.html)("site_key", "hzn_desgn", "hzn_top", "hzn_bot", "ph_h2o", "ec_satp", "oc", "CACO", "PBS", "sand_tot_psa", "silt_tot_psa", "clay_tot_psa")
iran.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(iran.hor$hzn_top) & iran.hor$hzn_desgn=="A", 0, iran.hor$hzn_top)
iran.hor2 = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbhor.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:8]
[names](https://rdrr.io/r/base/names.html)(iran.hor2) = [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence", "DESI", "hzn_top", "hzn_bot", "M_colour", "tex_psda", "hzn_desgn")
iran.site = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sgdb.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)
[names](https://rdrr.io/r/base/names.html)(iran.site) = [c](https://rdrr.io/r/base/c.html)("usiteid", "latitude_decimal_degrees", "longitude_decimal_degrees", "FAO", "Tax", "site_key")
iran.db = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(iran.site, iran.hor, iran.hor2))
iran.db$oc = iran.db$oc * 10
#summary(iran.db$oc)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(iran.db))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ iran.db[,i] = NA } }
chemsprops.IRANSPDB = iran.db[,col.names]
chemsprops.IRANSPDB$source_db = "Iran_SPDB"
chemsprops.IRANSPDB$confidence_degree = 4
chemsprops.IRANSPDB$project_url = ""
chemsprops.IRANSPDB$citation_url = "https://doi.org/10.1016/j.geoderma.2007.04.022"
chemsprops.IRANSPDB <- complete.vars(chemsprops.IRANSPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.IRANSPDB)
#> [1] 4759 36
```
#### 5\.3\.0\.27 Northern circumpolar permafrost soil profiles
* Hugelius, G., Bockheim, J. G., Camill, P., Elberling, B., Grosse, G., Harden, J. W., … \& Michaelson, G. (2013\). [A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region](https://doi.org/10.5194/essd-5-393-2013). Earth System Science Data (Online), 5(2\). Data download URL: [http://dx.doi.org/10\.5879/ECDS/00000002](http://dx.doi.org/10.5879/ECDS/00000002)
```
if({
ncscd.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/NCSCD/Harden_etal_2012_Hugelius_etal_2013_cleaned_data.csv", stringsAsFactors = FALSE)
ncscd.hors$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$X.C)*10
#summary(ncscd.hors$oc)
#hist(ncscd.hors$Layer.thickness.cm, breaks = 45)
ncscd.hors$Layer.thickness.cm = [ifelse](https://rdrr.io/r/base/ifelse.html)(ncscd.hors$Layer.thickness.cm<0, NA, ncscd.hors$Layer.thickness.cm)
ncscd.hors$hzn_bot = ncscd.hors$Basal.Depth.cm + ncscd.hors$Layer.thickness.cm
ncscd.hors$db_od = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$bulk.density.g.cm.3)
## Can we assume no coarse fragments?
ncscd.hors$wpg2 = 0
ncscd.hors$oc_d = [signif](https://rdrr.io/r/base/Round.html)(ncscd.hors$oc / 1000 * ncscd.hors$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(ncscd.hors$wpg2), 0, ncscd.hors$wpg2))/100, 3)
## very high values >40 kg/m3
ncscd.hors$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(ncscd.hors$Sample.date, format="%d-%m-%Y"), "%Y-%m-%d")
#summary(ncscd.hors$db_od)
ncscd.col = [c](https://rdrr.io/r/base/c.html)("Profile.ID", "citation", "site_obsdate", "Long", "Lat", "labsampnum", "layer_sequence", "Basal.Depth.cm", "hzn_bot", "Horizon.type", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ncscd.col[[which](https://rdrr.io/r/base/which.html)(!ncscd.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ncscd.hors))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ncscd.hors[,i] = NA } }
chemsprops.NCSCD = ncscd.hors[,ncscd.col]
chemsprops.NCSCD$source_db = "NCSCD"
chemsprops.NCSCD$confidence_degree = 10
chemsprops.NCSCD$project_url = "https://bolin.su.se/data/ncscd/"
chemsprops.NCSCD$citation_url = "https://doi.org/10.5194/essd-5-393-2013"
chemsprops.NCSCD = complete.vars(chemsprops.NCSCD, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od"), coords = [c](https://rdrr.io/r/base/c.html)("Long", "Lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NCSCD)
#> [1] 7104 36
```
#### 5\.3\.0\.28 CSIRO National Soil Site Database
* CSIRO (2020\). CSIRO National Soil Site Database. v4\. CSIRO. Data Collection. <https://data.csiro.au/collections/collection/CIcsiro:7526v004>. Data download URL: [https://doi.org/10\.25919/5eeb2a56eac12](https://doi.org/10.25919/5eeb2a56eac12) (available upon request)
* Searle, R. (2014\). The Australian site data collation to support the GlobalSoilMap. GlobalSoilMap: Basis of the global spatial soil information system, 127\.
```
if({
[library](https://rdrr.io/r/base/library.html)([Hmisc](http://biostat.mc.vanderbilt.edu/Hmisc))
cmdb <- [mdb.get](https://rdrr.io/pkg/Hmisc/man/mdb.get.html)("/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612.mdb")
#str(cmdb$SITES)
au.obs = cmdb$OBSERVATIONS[,[c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "o.date.desc", "o.latitude.GDA94", "o.longitude.GDA94")]
au.obs = au.obs[,]
coordinates(au.obs) <- ~o.longitude.GDA94+o.latitude.GDA94
proj4string(au.obs) <- CRS("+proj=longlat +ellps=GRS80 +no_defs")
au.xy <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(au.obs, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
#plot(au.xy[,c("o.longitude.GDA94", "o.latitude.GDA94")])
## all variables in one column and need to be sorted based on the lab method
#summary(cmdb$LAB_METHODS$LABM.SHORT.NAME)
#write.csv(cmdb$LAB_METHODS, "/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612_lab_methods.csv")
lab.tbl = [list](https://rdrr.io/r/base/list.html)(
[c](https://rdrr.io/r/base/c.html)("6_DC", "6A1", "6A1_UC", "6B1", "6B2", "6B2a", "6B2b", "6B3", "6B4", "6B4a", "6B4b", "6Z"), # %
[c](https://rdrr.io/r/base/c.html)("6B3a"), # g/kg
[c](https://rdrr.io/r/base/c.html)("6H4", "6H4_SCaRP"), # %
[c](https://rdrr.io/r/base/c.html)("7_C_B", "7_NR", "7A1", "7A2", "7A2a", "7A2b", "7A3", "7A4", "7A5", "7A6", "7A6a", "7A6b", "7A6b_MCLW"), # g/kg
[c](https://rdrr.io/r/base/c.html)("4A1", "4_NR", "4A_C_2.5", "4A_C_1", "4G1"),
[c](https://rdrr.io/r/base/c.html)("4C_C_1", "4C1", "4C2", "23A"),
[c](https://rdrr.io/r/base/c.html)("4B_C_2.5", "4B1", "4B2"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_C", "P10_HYD_C", "P10_PB_C", "P10_PB1_C", "P10_CF_C", "P10_I_C"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_Z", "P10_HYD_Z", "P10_PB_Z", "P10_PB1_Z", "P10_CF_Z", "P10_I_Z"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_S", "P10_HYD_S", "P10_PB_S", "P10_PB1_S", "P10_CF_S", "P10_I_S"),
[c](https://rdrr.io/r/base/c.html)("15C1modCEC", "15_HSK_CEC", "15J_CEC"),
[c](https://rdrr.io/r/base/c.html)("15I1", "15I2", "15I3", "15I4", "15D3_CEC"),
[c](https://rdrr.io/r/base/c.html)("15_BASES", "15_NR", "15J_H", "15J1"),
[c](https://rdrr.io/r/base/c.html)("2Z2_Grav", "P10_GRAV"),
[c](https://rdrr.io/r/base/c.html)("503.08a", "P3A_NR", "P3A1", "P3A1_C4", "P3A1_CLOD", "P3A1_e"),
[c](https://rdrr.io/r/base/c.html)("18F1_CA"),
[c](https://rdrr.io/r/base/c.html)("18F1_MG"),
[c](https://rdrr.io/r/base/c.html)("18F1_NA"),
[c](https://rdrr.io/r/base/c.html)("18F1_K", "18F2", "18A1mod", "18_NR", "18A1", "18A1_NR", "18B1", "18B2"),
[c](https://rdrr.io/r/base/c.html)("3_C_B", "3_NR", "3A_TSS"),
[c](https://rdrr.io/r/base/c.html)("3A_C_2.5", "3A1")
)
[names](https://rdrr.io/r/base/names.html)(lab.tbl) = [c](https://rdrr.io/r/base/c.html)("oc", "ocP", "c_tot", "n_tot", "ph_h2o", "ph_kcl", "ph_cacl2", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
val.lst = [lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(lab.tbl), function(i){x <- cmdb$LAB_RESULTS[cmdb$LAB_RESULTS$labm.code [%in%](https://rdrr.io/r/base/match.html) lab.tbl[[i]], [c](https://rdrr.io/r/base/c.html)("agency.code", "proj.code", "s.id", "o.id", "h.no", "labr.value")]; [names](https://rdrr.io/r/base/names.html)(x)[6] <- [names](https://rdrr.io/r/base/names.html)(lab.tbl)[i]; [return](https://rdrr.io/r/base/function.html)(x) })
[names](https://rdrr.io/r/base/names.html)(val.lst) = [names](https://rdrr.io/r/base/names.html)(lab.tbl)
val.lst$oc$oc = val.lst$oc$oc * 10
[names](https://rdrr.io/r/base/names.html)(val.lst$ocP)[6] = "oc"
val.lst$oc <- [rbind](https://rdrr.io/r/base/cbind.html)(val.lst$oc, val.lst$ocP)
val.lst$ocP = NULL
#summary(val.lst$oc$oc)
#str(val.lst, max.level = 1)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(val.lst)){ val.lst[[i]]$h.id <- [paste](https://rdrr.io/r/base/paste.html)(val.lst[[i]]$agency.code, val.lst[[i]]$proj.code, val.lst[[i]]$s.id, val.lst[[i]]$o.id, val.lst[[i]]$h.no, sep="_") }
au.hor <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(val.lst, function(x){x[,6:7]}), match="first")
#str(as.factor(au.hor$h.id))
cmdb$HORIZONS$h.id = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$agency.code, cmdb$HORIZONS$proj.code, cmdb$HORIZONS$s.id, cmdb$HORIZONS$o.id, cmdb$HORIZONS$h.no, sep="_")
cmdb$HORIZONS$hzn_desgn = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$h.desig.master, cmdb$HORIZONS$h.desig.subdiv, cmdb$HORIZONS$h.desig.suffix, sep="")
au.horT <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(cmdb$HORIZONS[,[c](https://rdrr.io/r/base/c.html)("h.id","s.id","h.no","h.texture","hzn_desgn","h.upper.depth","h.lower.depth")], au.hor, au.xy))
au.horT$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(au.horT$o.date.desc, format="%d%m%Y"), "%Y-%m-%d")
au.horT$sand_tot_psa = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$sand_tot_psa), 100-(au.horT$clay_tot_psa + au.horT$silt_tot_psa), au.horT$sand_tot_psa)
au.horT$hzn_top = au.horT$h.upper.depth*100
au.horT$hzn_bot = au.horT$h.lower.depth*100
au.horT$oc_d = [signif](https://rdrr.io/r/base/Round.html)(au.horT$oc / 1000 * au.horT$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$wpg2), 0, au.horT$wpg2))/100, 3)
au.cols.n = [c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "site_obsdate", "o.longitude.GDA94", "o.latitude.GDA94", "h.id", "h.no", "hzn_top", "hzn_bot", "hzn_desgn", "h.texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = au.cols.n[[which](https://rdrr.io/r/base/which.html)(!au.cols.n [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(au.horT))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ au.horT[,i] = NA } }
chemsprops.NatSoil = au.horT[,au.cols.n]
chemsprops.NatSoil$source_db = "CSIRO_NatSoil"
chemsprops.NatSoil$confidence_degree = 4
chemsprops.NatSoil$project_url = "https://www.csiro.au/en/Do-business/Services/Enviro/Soil-archive"
chemsprops.NatSoil$citation_url = "https://doi.org/10.25919/5eeb2a56eac12"
chemsprops.NatSoil = complete.vars(chemsprops.NatSoil, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od","clay_tot_psa","ph_h2o"), coords = [c](https://rdrr.io/r/base/c.html)("o.longitude.GDA94", "o.latitude.GDA94"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NatSoil)
#> [1] 70791 36
```
#### 5\.3\.0\.29 NAMSOTER
* Coetzee, M. E. (2001\). [NAMSOTER, a SOTER database for Namibia](https://edepot.wur.nl/485173). Agroecological Zoning, 458\.
* Coetzee, M. E. (2009\). Chemical characterisation of the soils of East Central Namibia (Doctoral dissertation, Stellenbosch: University of Stellenbosch).
```
if({
nam.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_profiles.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
nam.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_horizons.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
#summary(nam.hors$TOTN)
#summary(nam.hors$TOTC)
nam.hors$hzn_top <- NA
nam.hors$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(nam.hors$HONU==1, 0, nam.hors$hzn_top)
h.lst <- [lapply](https://rdrr.io/r/base/lapply.html)(1:7, function(x){[which](https://rdrr.io/r/base/which.html)(nam.hors$HONU==x)})
for(i in 2:7){
sel <- [match](https://rdrr.io/r/base/match.html)(nam.hors$PRID[h.lst[[i]]], nam.hors$PRID[h.lst[[i-1]]])
nam.hors$hzn_top[h.lst[[i]]] <- nam.hors$HBDE[h.lst[[i-1]]][sel]
}
nam.hors$HBDE <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(nam.hors$HBDE), nam.hors$hzn_top+50, nam.hors$HBDE)
#summary(nam.hors$HBDE)
namALL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(nam.hors, nam.profs, by=[c](https://rdrr.io/r/base/c.html)("PRID"))
namALL$k_ext = namALL$EXCK * 391
namALL$ca_ext = namALL$EXCA * 200
namALL$mg_ext = namALL$EXMG * 121
namALL$na_ext = namALL$EXNA * 230
#summary(namALL$MINA)
namALL$BULK <- [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$BULK>2.4, NA, namALL$BULK)
namALL$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="D", 80, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="A", 60, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M", 25, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C", 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="V", 1, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="F", 2.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M/A", 40, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C/M", 15, 0))))))))
#hist(namALL$wpg2)
namALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(namALL$TOTC / 1000 * namALL$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(namALL$wpg2), 0, namALL$wpg2))/100, 3)
#summary(namALL$oc_d)
#summary(namALL$PHAQ) ## very high ph
namALL$site_obsdate = 2000
nam.col = [c](https://rdrr.io/r/base/c.html)("PRID", "SLID", "site_obsdate", "LONG", "LATI", "labsampnum", "HONU", "hzn_top", "HBDE", "HODE", "PSCL", "CLPC", "STPC", "SDTO", "TOTC", "oc_d", "c_tot", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "wpg2", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELCO", "ec_12pre")
x.na = nam.col[[which](https://rdrr.io/r/base/which.html)(!nam.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(namALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ namALL[,i] = NA } }
chemsprops.NAMSOTER = namALL[,nam.col]
chemsprops.NAMSOTER$source_db = "NAMSOTER"
chemsprops.NAMSOTER$confidence_degree = 2
chemsprops.NAMSOTER$project_url = ""
chemsprops.NAMSOTER$citation_url = "https://edepot.wur.nl/485173"
chemsprops.NAMSOTER = complete.vars(chemsprops.NAMSOTER, sel = [c](https://rdrr.io/r/base/c.html)("TOTC","CLPC","PHAQ"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LATI"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NAMSOTER)
#> [1] 2953 36
```
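The multipliers applied to `EXCK`, `EXCA`, `EXMG` and `EXNA` above convert exchangeable cations from cmol(+)/kg (equivalent to meq/100 g) to mg/kg: the factor is the element's equivalent weight (atomic weight divided by charge) times 10, which gives roughly 200 for Ca, 121 for Mg, 230 for Na and 391 for K. A small sketch of the same conversion; the helper name and example value are illustrative:

```
cmolc_to_mgkg <- function(x, element = c("ca", "mg", "na", "k")) {
  ## x in cmol(+)/kg; returns mg/kg = x * equivalent weight [g] * 10
  element <- match.arg(element)
  eq.wt <- c(ca = 20.0, mg = 12.1, na = 23.0, k = 39.1)[[element]]
  x * eq.wt * 10
}
cmolc_to_mgkg(2.5, "ca")
#> [1] 500
```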
#### 5\.3\.0\.30 Worldwide organic soil carbon and nitrogen data
* Zinke, P. J., Millemann, R. E., \& Boden, T. A. (1986\). [Worldwide organic soil carbon and nitrogen data](https://cdiac.ess-dive.lbl.gov/ftp/ndp018/ndp018.pdf). Carbon Dioxide Information Center, Environmental Sciences Division, Oak Ridge National Laboratory. Data download URL: [https://dx.doi.org/10\.3334/CDIAC/lue.ndp018](https://dx.doi.org/10.3334/CDIAC/lue.ndp018)
* Note: poor spatial location accuracy, i.e. \<10 km. Bulk density for many points has been estimated, not measured. The sampling year has not been recorded, but the literature indicates: 1965, 1974, 1976, 1978, 1979, 1984\. Most samples come from natural (undisturbed) vegetation areas.
```
if({
ndp.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISCND/ndp018.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "?", "NA"), stringsAsFactors = FALSE)
[names](https://rdrr.io/r/base/names.html)(ndp.profs) = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "CARBON", "NITROGEN", "LAT", "LONG", "ELEV", "SOURCE", "HOLDRIGE", "OLSON", "PARENT")
for(j in [c](https://rdrr.io/r/base/c.html)("CARBON","NITROGEN","ELEV")){ ndp.profs[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(ndp.profs[,j]) }
#summary(ndp.profs$CARBON)
lat.s <- [grep](https://rdrr.io/r/base/grep.html)("S", ndp.profs$LAT) # lat.n <- grep("N", ndp.profs$LAT)
ndp.profs$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LAT))
ndp.profs$latitude_decimal_degrees[lat.s] = ndp.profs$latitude_decimal_degrees[lat.s] * -1
lon.w <- [grep](https://rdrr.io/r/base/grep.html)("W", ndp.profs$LONG) # lon.e <- grep("E", ndp.profs$LONG, fixed = TRUE)
ndp.profs$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LONG))
ndp.profs$longitude_decimal_degrees[lon.w] = ndp.profs$longitude_decimal_degrees[lon.w] * -1
#plot(ndp.profs[,c("longitude_decimal_degrees", "latitude_decimal_degrees")])
ndp.profs$hzn_top = 0; ndp.profs$hzn_bot = 100
## Sampling years from the doc: 1965, 1974, 1976, 1978, 1979, 1984
ndp.profs$site_obsdate = "1982"
ndp.col = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "CARBON", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ndp.col[[which](https://rdrr.io/r/base/which.html)(!ndp.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ndp.profs))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ndp.profs[,i] = NA } }
chemsprops.ISCND = ndp.profs[,ndp.col]
chemsprops.ISCND$source_db = "ISCND"
chemsprops.ISCND$confidence_degree = 8
chemsprops.ISCND$project_url = "https://iscn.fluxdata.org/data/"
chemsprops.ISCND$citation_url = "https://dx.doi.org/10.3334/CDIAC/lue.ndp018"
chemsprops.ISCND = complete.vars(chemsprops.ISCND, sel = [c](https://rdrr.io/r/base/c.html)("CARBON"), coords = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ISCND)
#> [1] 3977 36
```
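The latitude and longitude fields in `ndp018.csv` are stored as strings with a hemisphere letter; the chunk above strips everything that is not a digit, dot or minus and then flips the sign for S and W. The same logic as a compact, self-contained sketch (the helper and the example strings are illustrative):

```
parse_coord <- function(x) {
  ## keep digits, dot and minus, then negate southern / western hemisphere values
  val <- as.numeric(gsub("[^0-9.-]", "", x))
  ifelse(grepl("S|W", x), -val, val)
}
parse_coord(c("23.5S", "102.3 W", "45.1N", "7.2E"))
#> [1]  -23.5 -102.3   45.1    7.2
```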
#### 5\.3\.0\.31 Interior Alaska Carbon and Nitrogen stocks
* Manies, K., Waldrop, M., and Harden, J. (2020\): Generalized models to estimate carbon and nitrogen stocks of organic soil horizons in Interior Alaska, Earth Syst. Sci. Data, 12, 1745–1757, [https://doi.org/10\.5194/essd\-12\-1745\-2020](https://doi.org/10.5194/essd-12-1745-2020), Data download URL: [https://doi.org/10\.5066/P960N1F9](https://doi.org/10.5066/P960N1F9)
```
if({
al.gps <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Site_GPS_coordinates_v1-1.csv", stringsAsFactors = FALSE)
## Different datums!
#summary(as.factor(al.gps$Datum))
al.gps1 = al.gps[al.gps$Datum=="NAD83",]
coordinates(al.gps1) = ~ Longitude + Latitude
proj4string(al.gps1) = "+proj=longlat +datum=NAD83"
al.gps0 = spTransform(al.gps1, CRS("+proj=longlat +datum=WGS84"))
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Longitude"] = al.gps0@coords[,1]
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Latitude"] = al.gps0@coords[,2]
al.gps$site = al.gps$Site
al.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Generalized_models_for_CandN_Alaska_v1-1.csv", stringsAsFactors = FALSE)
al.hor$hzn_top = al.hor$depth - [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$thickness)
al.hor$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(al.hor$date, format = "%m/%d/%Y"), "%Y-%m-%d")
al.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$carbon) * 10
al.hor$n_tot = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$nitrogen) * 10
al.hor$oc_d = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$Cdensity) * 1000
#summary(al.hor$oc_d)
al.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(al.hor, al.gps, by=[c](https://rdrr.io/r/base/c.html)("site"))
al.col = [c](https://rdrr.io/r/base/c.html)("profile", "description", "site_obsdate", "Longitude", "Latitude", "sampleID", "layer_sequence", "hzn_top", "depth", "Hcode", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BDfine", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = al.col[[which](https://rdrr.io/r/base/which.html)(!al.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(al.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ al.horA[,i] = NA } }
chemsprops.Alaska = al.horA[,al.col]
chemsprops.Alaska$source_db = "Alaska_interior"
chemsprops.Alaska$confidence_degree = 1
chemsprops.Alaska$project_url = "https://www.usgs.gov/centers/gmeg"
chemsprops.Alaska$citation_url = "https://doi.org/10.5194/essd-12-1745-2020"
chemsprops.Alaska = complete.vars(chemsprops.Alaska, sel = [c](https://rdrr.io/r/base/c.html)("oc","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Alaska)
#> [1] 3882 36
```
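The chunk above reprojects the NAD83 sites to WGS84 with the sp workflow (`coordinates()`, `proj4string()`, `spTransform()`). For reference, the same datum transformation with the sf package would look like the sketch below; the package choice and the example coordinates are illustrative, not part of the import above.

```
library(sf)
## made-up NAD83 points for illustration
pts <- data.frame(site = c("A", "B"),
                  Longitude = c(-147.72, -148.05),
                  Latitude = c(64.86, 64.91))
pts.nad83 <- st_as_sf(pts, coords = c("Longitude", "Latitude"), crs = 4269) # EPSG:4269 = NAD83
pts.wgs84 <- st_transform(pts.nad83, 4326)                                  # EPSG:4326 = WGS84
st_coordinates(pts.wgs84)
```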
#### 5\.3\.0\.32 Croatian Soil Pedon data
* Martinović J., (2000\) [“Tla u Hrvatskoj”](https://books.google.nl/books?id=k_a2MgAACAAJ), Monografija, Državna uprava za zaštitu prirode i okoliša, str. 269, Zagreb. ISBN: 9536793059
* Bašić F., (2014\) [“The Soils of Croatia”](https://books.google.nl/books?id=VbJEAAAAQBAJ). World Soils Book Series, Springer Science \& Business Media, 179 pp. ISBN: 9400758154
```
if({
bpht.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_sites_1997.csv", stringsAsFactors = FALSE)
bpht.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_1997.csv", stringsAsFactors = FALSE)
## filter typos
for(j in [c](https://rdrr.io/r/base/c.html)("GOR", "DON", "MKP", "PH1", "PH2", "MSP", "MP", "MG", "HUM", "EXTN", "EXTP", "EXTK", "CAR")){
bpht.hors[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(bpht.hors[,j])
}
## Convert to the USDA standard
bpht.hors$sand_tot_psa <- bpht.hors$MSP * 0.8 + bpht.hors$MKP
bpht.hors$silt_tot_psa <- bpht.hors$MP + bpht.hors$MSP * 0.2
bpht.hors$oc <- [signif](https://rdrr.io/r/base/Round.html)(bpht.hors$HUM/1.724 * 10, 3)
## summary(bpht.hors$sand_tot_psa)
bpht.s.lst <- [c](https://rdrr.io/r/base/c.html)("site_key", "UZORAK", "Cro16.30_X", "Cro16.30_Y", "FITOC", "STIJENA", "HID_DREN", "DUBINA")
bpht.hor = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(bpht.site[,bpht.s.lst], bpht.hors)
bpht.hor$wpg2 = bpht.hor$STIJENA
bpht.hor$DON <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(bpht.hor$DON), bpht.hor$GOR+50, bpht.hor$DON)
bpht.hor$depth <- bpht.hor$GOR + (bpht.hor$DON - bpht.hor$GOR)/2
bpht.hor = bpht.hor[,]
bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)] <- bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)]*.3
bpht.hor$sample_key = [make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)(bpht.hor$PEDOL_ID, bpht.hor$OZN, sep="_"))
bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"] <- bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"]/10
## convert N, P, K
#summary(bpht.hor$EXTK) -- measurements units?
bpht.hor$p_ext = bpht.hor$EXTP * 4.364
bpht.hor$k_ext = bpht.hor$EXTK * 8.3013
bpht.hor = bpht.hor[,]
## coordinates:
bpht.pnts = SpatialPointsDataFrame(bpht.hor[,[c](https://rdrr.io/r/base/c.html)("Cro16.30_X","Cro16.30_Y")], bpht.hor["site_key"], proj4string = CRS("+proj=tmerc +lat_0=0 +lon_0=16.5 +k=0.9999 +x_0=2500000 +y_0=0 +ellps=bessel +towgs84=550.499,164.116,475.142,5.80967,2.07902,-11.62386,0.99999445824 +units=m"))
bpht.pnts.ll <- spTransform(bpht.pnts, CRS("+proj=longlat +datum=WGS84"))
bpht.hor$longitude_decimal_degrees = bpht.pnts.ll@coords[,1]
bpht.hor$latitude_decimal_degrees = bpht.pnts.ll@coords[,2]
bpht.h.lst <- [c](https://rdrr.io/r/base/c.html)('site_key', 'OZ_LIST_PROF', 'UZORAK', 'longitude_decimal_degrees', 'latitude_decimal_degrees', 'labsampnum', 'layer_sequence', 'GOR', 'DON', 'OZN', 'TT', 'MG', 'silt_tot_psa', 'sand_tot_psa', 'oc', 'oc_d', 'c_tot', 'EXTN', 'PH2', 'PH1', 'ph_cacl2', 'cec_sum', 'cec_nh4', 'ecec', 'wpg2', 'db_od', 'ca_ext', 'mg_ext', 'na_ext', 'k_ext', 'ec_satp', 'ec_12pre')
x.na = bpht.h.lst[[which](https://rdrr.io/r/base/which.html)(!bpht.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(bpht.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ bpht.hor[,i] = NA } }
chemsprops.bpht = bpht.hor[,bpht.h.lst]
chemsprops.bpht$source_db = "Croatian_Soil_Pedon"
chemsprops.bpht$confidence_degree = 1
chemsprops.bpht$project_url = "http://www.haop.hr/"
chemsprops.bpht$citation_url = "https://books.google.nl/books?id=k_a2MgAACAAJ"
chemsprops.bpht = complete.vars(chemsprops.bpht, sel = [c](https://rdrr.io/r/base/c.html)("oc","MG","PH1","k_ext"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.bpht)
#> [1] 5746 36
```
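Organic carbon in the chunk above is derived from the humus percentage with the conventional Van Bemmelen factor of 1.724 (organic matter ≈ 1.724 × organic carbon), followed by the %-to-g/kg conversion. As a worked example (values illustrative):

```
humus_to_oc <- function(humus_pct, om_factor = 1.724) {
  ## humus / organic matter [%] -> organic carbon [g/kg]
  signif(humus_pct / om_factor * 10, 3)
}
humus_to_oc(3.5)
#> [1] 20.3
```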
#### 5\.3\.0\.33 Remnant native SOC database
* Sanderman, J., (2017\) “Remnant native SOC database for release.xlsx”, Soil carbon profile data from paired land use comparisons, [https://doi.org/10\.7910/DVN/QQQM8V/8MSBNI](https://doi.org/10.7910/DVN/QQQM8V/8MSBNI), Harvard Dataverse, V1
```
if({
rem.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 3)
rem.site <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 2)
rem.ref <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 4)
rem.site = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.site, rem.ref[,[c](https://rdrr.io/r/base/c.html)("Source.No.","DOI","Sample_year")], by=[c](https://rdrr.io/r/base/c.html)("Source.No."))
rem.site$Site = rem.site$Site.ID
rem.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.hor, rem.site, by=[c](https://rdrr.io/r/base/c.html)("Site"))
rem.horA$hzn_top = rem.horA$'U_depth.(m)'*100
rem.horA$hzn_bot = rem.horA$'L_depth.(m)'*100
rem.horA$db_od = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)')), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'estimated.BD.(Mg/m3)'), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)'))
rem.horA$oc_d = [signif](https://rdrr.io/r/base/Round.html)(rem.horA$'OC.(g/kg)' * rem.horA$db_od, 3)
#summary(rem.horA$oc_d)
rem.col = [c](https://rdrr.io/r/base/c.html)("Source.No.", "Site", "Sample_year", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "OC.(g/kg)", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = rem.col[[which](https://rdrr.io/r/base/which.html)(!rem.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(rem.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ rem.horA[,i] = NA } }
chemsprops.RemnantSOC = rem.horA[,rem.col]
chemsprops.RemnantSOC$source_db = "WHRC_remnant_SOC"
chemsprops.RemnantSOC$confidence_degree = 8
chemsprops.RemnantSOC$project_url = "https://www.woodwellclimate.org/research-area/carbon/"
chemsprops.RemnantSOC$citation_url = "http://dx.doi.org/10.1073/pnas.1706103114"
chemsprops.RemnantSOC = complete.vars(chemsprops.RemnantSOC, sel = [c](https://rdrr.io/r/base/c.html)("OC.(g/kg)","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.RemnantSOC)
#> [1] 1604 36
```
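Bulk density in the chunk above is taken from the measured column and, where that is missing, from the estimated column. The same preference order can be written more compactly with `dplyr::coalesce()`; the sketch below uses toy vectors rather than the spreadsheet columns.

```
library(dplyr)
measured <- c(1.21, NA, 0.98)    # measured BD (Mg/m3), with one gap
estimated <- c(1.25, 1.10, 1.00) # model-estimated BD (Mg/m3)
coalesce(measured, estimated)    # measured where available, estimated otherwise
#> [1] 1.21 1.10 0.98
```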
#### 5\.3\.0\.34 Soil Health DB
* Jian, J., Du, X., \& Stewart, R. D. (2020\). A database for global soil health assessment. Scientific Data, 7(1\), 1\-8\. [https://doi.org/10\.1038/s41597\-020\-0356\-3](https://doi.org/10.1038/s41597-020-0356-3). Data download URL: <https://github.com/jinshijian/SoilHealthDB>
Note: some information about the column names is available ([https://www.nature.com/articles/s41597\-020\-0356\-3/tables/3](https://www.nature.com/articles/s41597-020-0356-3/tables/3)), but a detailed explanation is missing.
```
if({
shdb.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/SoilHealthDB/SoilHealthDB_V2.xlsx", sheet = 1, na.strings = [c](https://rdrr.io/r/base/c.html)("NA", "NotAvailable", "Not-available"))
#summary(as.factor(shdb.hor$SamplingDepth))
shdb.hor$hzn_top = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][1] }))
shdb.hor$hzn_bot = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][2] }))
shdb.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_top), 0, shdb.hor$hzn_top)
shdb.hor$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_bot), 15, shdb.hor$hzn_bot)
shdb.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor$BackgroundSOC) * 10
shdb.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(shdb.hor$oc * shdb.hor$SoilBD, 3)
for(j in [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SiltPerc", "SandPerc", "SoilpH")){ shdb.hor[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor[,j]) }
#summary(shdb.hor$oc_d)
shdb.col = [c](https://rdrr.io/r/base/c.html)("StudyID", "ExperimentID", "SamplingYear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "Texture", "ClayPerc", "SiltPerc", "SandPerc", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "SoilpH", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "SoilBD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = shdb.col[[which](https://rdrr.io/r/base/which.html)(!shdb.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(shdb.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ shdb.hor[,i] = NA } }
chemsprops.SoilHealthDB = shdb.hor[,shdb.col]
chemsprops.SoilHealthDB$source_db = "SoilHealthDB"
chemsprops.SoilHealthDB$confidence_degree = 8
chemsprops.SoilHealthDB$project_url = "https://github.com/jinshijian/SoilHealthDB"
chemsprops.SoilHealthDB$citation_url = "https://doi.org/10.1038/s41597-020-0356-3"
chemsprops.SoilHealthDB = complete.vars(chemsprops.SoilHealthDB, sel = [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SoilpH", "oc"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SoilHealthDB)
#> [1] 120 36
```
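Sampling depths in SoilHealthDB are stored as strings of the form `0-to-15`; the chunk above splits them into numeric upper and lower limits and falls back to 0 and 15 cm where the string is missing. The parsing step on its own (example strings are illustrative):

```
depths <- c("0-to-15", "15-to-30", NA)
parts <- strsplit(depths, "-to-")
hzn_top <- as.numeric(sapply(parts, `[`, 1))
hzn_bot <- as.numeric(sapply(parts, `[`, 2))
data.frame(depths, hzn_top, hzn_bot)
```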
#### 5\.3\.0\.35 Global Harmonized Dataset of SOC change under perennial crops
* Ledo, A., Hillier, J., Smith, P. et al. (2019\) A global, empirical, harmonised dataset of soil organic carbon changes under perennial crops. Sci Data 6, 57\. [https://doi.org/10\.1038/s41597\-019\-0062\-1](https://doi.org/10.1038/s41597-019-0062-1). Data download URL: [https://doi.org/10\.6084/m9\.figshare.7637210\.v2](https://doi.org/10.6084/m9.figshare.7637210.v2)
Note: many sampling years are missing for the PREVIOUS SOC AND SOIL CHARACTERISTICS records.
```
if({
[library](https://rdrr.io/r/base/library.html)(["readxl"](https://readxl.tidyverse.org))
socpdb <- readxl::[read_excel](https://readxl.tidyverse.org/reference/read_excel.html)("/mnt/diskstation/data/Soil_points/INT/SOCPDB/SOC_perennials_DATABASE.xls", skip=1, sheet = 1)
#names(socpdb)
#summary(as.numeric(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$yearPpub)-5, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)(socpdb$year_measure<1960, NA, socpdb$year_measure)
socpdb$depth_current = socpdb$soil_to_cm_current - socpdb$soil_from_cm_current
socpdb = socpdb[socpdb$depth_current>5,]
socpdb$SOC_g_kg_current = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_current) / (socpdb$depth_current/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$bulk_density_Mg_m3_current) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current))
socpdb$depth_previous = socpdb$soil_to_cm_previous - socpdb$soil_from_cm_previous
socpdb$SOC_g_kg_previous = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_previous) / (socpdb$depth_previous/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$Bulkdensity_previous) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous))
hor.b = [which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(socpdb) [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ID", "plotID", "Longitud", "Latitud", "year_measure", "years_since_luc", "USDA", "original_source"))
socpdb1 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_current", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
#summary(as.numeric(socpdb1$years_since_luc))
## 10 yrs median
socpdb1$site_obsdate = socpdb1$year_measure
socpdb2 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_previous", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
socpdb2$site_obsdate = socpdb2$year_measure - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc)), 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc))
[colnames](https://rdrr.io/r/base/colnames.html)(socpdb2) <- [sub](https://rdrr.io/r/base/grep.html)("_previous", "_current", [colnames](https://rdrr.io/r/base/colnames.html)(socpdb2))
nm.socpdb = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_h2o", "db_od")
sel.socdpb1 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "bulk_density_Mg_m3_current")
sel.socdpb2 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "Bulkdensity_current")
socpdbALL = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([list](https://rdrr.io/r/base/list.html)(socpdb1[,sel.socdpb1], socpdb2[,sel.socdpb2]), function(i){ dplyr::[mutate_all](https://dplyr.tidyverse.org/reference/mutate_all.html)([setNames](https://rdrr.io/r/stats/setNames.html)(i, nm.socpdb), as.character) })))
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(socpdbALL)){ socpdbALL[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdbALL[,j]) }
#summary(socpdbALL$oc) ## mean = 15
#summary(socpdbALL$db_od)
#summary(socpdbALL$ph_h2o)
socpdbALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(socpdbALL$oc * socpdbALL$db_od, 3)
#summary(socpdbALL$oc_d)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(socpdbALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ socpdbALL[,i] = NA } }
chemsprops.SOCPDB <- socpdbALL[,col.names]
chemsprops.SOCPDB$source_db = "SOCPDB"
chemsprops.SOCPDB$confidence_degree = 5
chemsprops.SOCPDB$project_url = "https://africap.info/"
chemsprops.SOCPDB$citation_url = "https://doi.org/10.1038/s41597-019-0062-1"
chemsprops.SOCPDB = complete.vars(chemsprops.SOCPDB, sel = [c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SOCPDB)
#> [1] 1526 36
```
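To get both the current and the previous measurements into one table, the chunk above maps two differently named column sets onto a single harmonized schema with `setNames()` and stacks them with `dplyr::bind_rows()`. A toy version of that rename-then-stack pattern (column names and values are invented):

```
library(dplyr)
std.names <- c("id", "oc", "db_od")
tbl.current <- data.frame(ID = 1:2, SOC_current = c(12, 18), BD_current = c(1.3, 1.1))
tbl.previous <- data.frame(ID = 1:2, SOC_previous = c(15, 20), BD_previous = c(1.2, 1.0))
harmonized <- bind_rows(lapply(list(tbl.current, tbl.previous),
                        function(i){ mutate_all(setNames(i, std.names), as.character) }))
str(harmonized)
```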
#### 5\.3\.0\.36 Stocks of organic carbon in German agricultural soils (BZE\_LW)
* Poeplau, C., Jacobs, A., Don, A., Vos, C., Schneider, F., Wittnebel, M., … \& Flessa, H. (2020\). [Stocks of organic carbon in German agricultural soils—Key results of the first comprehensive inventory](https://doi.org/10.1002/jpln.202000113). Journal of Plant Nutrition and Soil Science, 183(6\), 665\-681\. [https://doi.org/10\.1002/jpln.202000113](https://doi.org/10.1002/jpln.202000113). Data download URL: [https://doi.org/10\.3220/DATA20200203151139](https://doi.org/10.3220/DATA20200203151139)
Note: For protection of data privacy, each coordinate was randomly generated within a radius of 4 km around the planned sampling point. These data are hence probably not suitable for spatial analysis or predictive soil mapping.
```
if({
site.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/SITE.xlsx", sheet = 1)
site.de$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([paste0](https://rdrr.io/r/base/paste.html)("01-", site.de$Sampling_month, "-", site.de$Sampling_year), format="%d-%m-%Y"), "%Y-%m-%d")
site.de.xy = site.de[,[c](https://rdrr.io/r/base/c.html)("PointID","xcoord","ycoord")]
## 3104
coordinates(site.de.xy) <- ~xcoord+ycoord
proj4string(site.de.xy) <- CRS("+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
site.de.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.de.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.de$longitude_decimal_degrees = site.de.ll[,2]
site.de$latitude_decimal_degrees = site.de.ll[,3]
hor.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1)
#hor.de = plyr::join(openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1), openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/HORIZON_DATA.xlsx", sheet = 1), by="PointID")
## 17,189 rows
horALL.de = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.de, site.de, by="PointID")
## Sand content [Mass-%]; grain size 63-2000µm (DIN ISO 11277)
horALL.de$sand_tot_psa <- horALL.de$gS + horALL.de$mS + horALL.de$fS + 0.2 * horALL.de$gU
horALL.de$silt_tot_psa <- horALL.de$fU + horALL.de$mU + 0.8 * horALL.de$gU
## Convert millisiemens/meter [mS/m] to microsiemens/centimeter [μS/cm, uS/cm]
horALL.de$ec_satp = horALL.de$EC_H2O / 10
hor.sel.de <- [c](https://rdrr.io/r/base/c.html)("PointID", "Main.soil.type", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "Layer.upper.limit", "Layer.lower.limit", "hzn_desgn", "Soil.texture.class", "Clay", "silt_tot_psa", "sand_tot_psa", "TOC", "oc_d", "TC", "TN", "ph_kcl", "pH_H2O", "pH_CaCl2", "cec_sum", "cec_nh4", "ecec", "Rock.fragment.fraction", "BD_FS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
#summary(horALL.de$TOC) ## mean = 12.3
#summary(horALL.de$BD_FS) ## mean = 1.41
#summary(horALL.de$pH_H2O)
horALL.de$oc_d = [signif](https://rdrr.io/r/base/Round.html)(horALL.de$TOC * horALL.de$BD_FS * (1-horALL.de$Rock.fragment.fraction/100), 3)
#summary(horALL.de$oc_d)
x.na = hor.sel.de[[which](https://rdrr.io/r/base/which.html)(!hor.sel.de [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.de))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.de[,i] = NA } }
chemsprops.BZE_LW <- horALL.de[,hor.sel.de]
chemsprops.BZE_LW$source_db = "BZE_LW"
chemsprops.BZE_LW$confidence_degree = 3
chemsprops.BZE_LW$project_url = "https://www.thuenen.de/de/ak/"
chemsprops.BZE_LW$citation_url = "https://doi.org/10.1002/jpln.202000113"
chemsprops.BZE_LW = complete.vars(chemsprops.BZE_LW, sel = [c](https://rdrr.io/r/base/c.html)("TOC", "pH_H2O", "Clay"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.BZE_LW)
#> [1] 17187 36
```
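The sand and silt lines above appear to shift the particle-size boundary from the DIN/ISO convention (sand 63–2000 µm) to the USDA convention (sand 50–2000 µm) by reassigning 20% of the coarse silt fraction (`gU`, 20–63 µm) to sand. The arithmetic on toy fractions (mass-%, values invented):

```
gS <- 20; mS <- 25; fS <- 15   # coarse, medium, fine sand [mass-%]
gU <- 10; mU <- 12; fU <- 8    # coarse, medium, fine silt [mass-%]
sand_usda <- gS + mS + fS + 0.2 * gU   # 62
silt_usda <- fU + mU + 0.8 * gU        # 28
c(sand = sand_usda, silt = silt_usda)
```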
#### 5\.3\.0\.37 AARDEWERK\-Vlaanderen\-2010
* Beckers, V., Jacxsens, P., Van De Vreken, Ph., Van Meirvenne, M., Van Orshoven, J. (2011\). Gebruik en installatie van de bodemdatabank AARDEWERK\-Vlaanderen\-2010\. Spatial Applications Division Leuven, Belgium. Data download URL: [https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4\-8070\-4220\-afac\-258ea040fb30](https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4-8070-4220-afac-258ea040fb30)
* Ottoy, S., Beckers, V., Jacxsens, P., Hermy, M., \& Van Orshoven, J. (2015\). [Multi\-level statistical soil profiles for assessing regional soil organic carbon stocks](https://doi.org/10.1016/j.geoderma.2015.04.001). Geoderma, 253, 12\-20\. [https://doi.org/10\.1016/j.geoderma.2015\.04\.001](https://doi.org/10.1016/j.geoderma.2015.04.001)
```
if({
site.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Profiel.csv")
site.vl$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([sapply](https://rdrr.io/r/base/lapply.html)(site.vl$Profilering_Datum, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, " ")[[1]][1]}), format="%d-%m-%Y"), "%Y-%m-%d")
site.vl.xy = site.vl[,[c](https://rdrr.io/r/base/c.html)("ID","Coordinaat_Lambert72_X","Coordinaat_Lambert72_Y")]
## 7020
site.vl.xy = site.vl.xy[[complete.cases](https://rdrr.io/r/stats/complete.cases.html)(site.vl.xy),]
coordinates(site.vl.xy) <- ~Coordinaat_Lambert72_X+Coordinaat_Lambert72_Y
proj4string(site.vl.xy) <- CRS("+init=epsg:31300")
site.vl.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.vl.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.vl$longitude_decimal_degrees = join(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_X
site.vl$latitude_decimal_degrees = join(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_Y
site.vl$Profiel_ID = site.vl$ID
hor.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Horizont.csv")
## 42,529 rows
horALL.vl = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.vl, site.vl, by="Profiel_ID")
horALL.vl$oc = horALL.vl$Humus*10 /1.724
[summary](https://rdrr.io/r/base/summary.html)(horALL.vl$oc) ## mean = 7.8
#summary(horALL.vl$pH_H2O)
horALL.vl$hzn_top <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_boven1", "Diepte_grens_boven2")], na.rm=TRUE)/2
horALL.vl$hzn_bot <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_onder1","Diepte_grens_onder2")], na.rm=TRUE)/2
horALL.vl$sand_tot_psa <- horALL.vl$T50_100 + horALL.vl$T100_200 + horALL.vl$T200_500 + horALL.vl$T500_1000 + horALL.vl$T1000_2000
horALL.vl$silt_tot_psa <- horALL.vl$T2_10 + horALL.vl$T10_20 + horALL.vl$T20_50
horALL.vl$tex_psda = [paste0](https://rdrr.io/r/base/paste.html)(horALL.vl$HorizontTextuur_code1, horALL.vl$HorizontTextuur_code2)
## some corrupt coordinates
horALL.vl <- horALL.vl[horALL.vl$latitude_decimal_degrees > 50.6,]
hor.sel.vl <- [c](https://rdrr.io/r/base/c.html)("Profiel_ID", "Bodemgroep", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "Hor_nr", "hzn_top", "hzn_bot", "Naam", "tex_psda", "T0_2", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "pH_KCl", "pH_H2O", "ph_cacl2", "Sorptiecapaciteit_Totaal", "cec_nh4", "ecec", "Tgroter_dan_2000", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = hor.sel.vl[[which](https://rdrr.io/r/base/which.html)(!hor.sel.vl [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.vl))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.vl[,i] = NA } }
chemsprops.Vlaanderen <- horALL.vl[,hor.sel.vl]
chemsprops.Vlaanderen$source_db = "Vlaanderen"
chemsprops.Vlaanderen$confidence_degree = 2
chemsprops.Vlaanderen$project_url = "https://www.dov.vlaanderen.be"
chemsprops.Vlaanderen$citation_url = "https://doi.org/10.1016/j.geoderma.2015.04.001"
chemsprops.Vlaanderen = complete.vars(chemsprops.Vlaanderen, sel = [c](https://rdrr.io/r/base/c.html)("oc", "pH_H2O", "T0_2"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Vlaanderen)
#> [1] 41310 36
```
#### 5\.3\.0\.38 Chilean Soil Organic Carbon database
* Pfeiffer, M., Padarian, J., Osorio, R., Bustamante, N., Olmedo, G. F., Guevara, M., et al. (2020\) [CHLSOC: the Chilean Soil Organic Carbon database, a multi\-institutional collaborative effort](https://doi.org/10.5194/essd-12-457-2020). Earth Syst. Sci. Data, 12, 457–468, [https://doi.org/10\.5194/essd\-12\-457\-2020](https://doi.org/10.5194/essd-12-457-2020). Data download URL: [https://doi.org/10\.17605/OSF.IO/NMYS3](https://doi.org/10.17605/OSF.IO/NMYS3)
```
if({
chl.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Chile/CHLSOC/CHLSOC_v1.0.csv", stringsAsFactors = FALSE)
#summary(chl.hor$oc)
chl.hor$oc = chl.hor$oc*10
#summary(chl.hor$bd)
chl.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(chl.hor$oc * chl.hor$bd * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(chl.hor$crf), 0, chl.hor$crf))/100, 3)
#summary(chl.hor$oc_d)
chl.col = [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "year", "long", "lat", "labsampnum", "layer_sequence", "top", "bottom", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "crf", "bd", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = chl.col[[which](https://rdrr.io/r/base/which.html)(!chl.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(chl.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ chl.hor[,i] = NA } }
chemsprops.CHLSOC = chl.hor[,chl.col]
chemsprops.CHLSOC$source_db = "Chilean_SOCDB"
chemsprops.CHLSOC$confidence_degree = 4
chemsprops.CHLSOC$project_url = "https://doi.org/10.17605/OSF.IO/NMYS3"
chemsprops.CHLSOC$citation_url = "https://doi.org/10.5194/essd-12-457-2020"
chemsprops.CHLSOC = complete.vars(chemsprops.CHLSOC, sel = [c](https://rdrr.io/r/base/c.html)("oc", "bd"), coords = [c](https://rdrr.io/r/base/c.html)("long", "lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CHLSOC)
#> [1] 16371 36
```
#### 5\.3\.0\.39 Scotland (NSIS\_1\)
* Lilly, A., Bell, J.S., Hudson, G., Nolan, A.J. \& Towers. W. (Compilers) (2010\). National soil inventory of Scotland (NSIS\_1\); site location, sampling and profile description protocols. (1978\-1988\). Technical Bulletin. Macaulay Institute, Aberdeen. [https://doi.org/10\.5281/zenodo.4650230](https://doi.org/10.5281/zenodo.4650230). Data download URL: [https://www.hutton.ac.uk/learning/natural\-resource\-datasets/soilshutton/soils\-maps\-scotland/download](https://www.hutton.ac.uk/learning/natural-resource-datasets/soilshutton/soils-maps-scotland/download)
```
if({
sco.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Scotland/NSIS_10km.csv")
coordinates(sco.xy) = ~ easting + northing
proj4string(sco.xy) = "EPSG:27700"
sco.ll = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(sco.xy, CRS("EPSG:4326")))
sco.ll$site_obsdate = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(sco.ll$profile_da, function(x){[substr](https://rdrr.io/r/base/substr.html)(x, [nchar](https://rdrr.io/r/base/nchar.html)(x)-3, [nchar](https://rdrr.io/r/base/nchar.html)(x))}))
#hist(sco.ll$site_obsdate[sco.ll$site_obsdate>1000])
## no points after 1990!!
#summary(sco.ll$exch_k)
sco.in.name = [c](https://rdrr.io/r/base/c.html)("profile_id", "site_obsdate", "easting", "northing", "horz_top", "horz_botto",
"horz_symb", "sample_id", "texture_ps",
"sand_int", "silt_int", "clay", "carbon", "nitrogen", "ph_h2o", "exch_ca",
"exch_mg", "exch_na", "exch_k", "sum_cation")
#sco.in.name[which(!sco.in.name %in% names(sco.ll))]
sco.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sco.ll[,sco.in.name])
#sco.x = sco.x[!sco.x$sample_id==0,]
#summary(sco.x$carbon)
sco.out.name = [c](https://rdrr.io/r/base/c.html)("usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees",
"hzn_bot", "hzn_top", "hzn_desgn", "labsampnum", "tex_psda", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "oc", "n_tot", "ph_h2o", "ca_ext",
"mg_ext", "na_ext", "k_ext", "cec_sum")
## translate values
sco.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(sco.in.name)))
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="profile_id")]] = "paste(x)"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_ca")]] = "as.numeric(x)*200"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_mg")]] = "as.numeric(x)*121"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_k")]] = "as.numeric(x)*391"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_na")]] = "as.numeric(x)*230"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="carbon")]] = "as.numeric(x)*10"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="nitrogen")]] = "as.numeric(x)*10"
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(sco.in.name, sco.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(sco.fun.lst)), "scotland_soilab_transvalues.csv")
sco.soil = transvalues(sco.x, sco.out.name, sco.in.name, sco.fun.lst)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sco.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sco.soil[,i] = NA } }
chemsprops.ScotlandNSIS1 = sco.soil[,col.names]
chemsprops.ScotlandNSIS1$source_db = "ScotlandNSIS1"
chemsprops.ScotlandNSIS1$confidence_degree = 2
chemsprops.ScotlandNSIS1$project_url = "http://soils.environment.gov.scot/"
chemsprops.ScotlandNSIS1$citation_url = "https://doi.org/10.5281/zenodo.4650230"
chemsprops.ScotlandNSIS1 = complete.vars(chemsprops.ScotlandNSIS1, sel = [c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ScotlandNSIS1)
#> [1] 2977 36
```
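The translation rules above are stored as character strings that reference a placeholder variable `x` (for example `"as.numeric(x)*391"`), written to `scotland_soilab_transvalues.csv`, and then applied with the `transvalues()` helper, which is not defined in this section. A minimal sketch of how such string rules could be applied column by column; this is an illustration only and may differ from the actual `transvalues()` implementation:

```
apply_rules <- function(df, out.name, in.name, fun.lst) {
  ## evaluate each rule (an expression in x) on its input column, then rename
  out <- lapply(seq_along(in.name), function(i){
    x <- df[[in.name[i]]]
    eval(parse(text = fun.lst[[i]]))
  })
  names(out) <- out.name
  as.data.frame(out, stringsAsFactors = FALSE)
}
## toy example: keep the profile id as text, convert exchangeable K from cmol(+)/kg to mg/kg
toy <- data.frame(profile_id = c("P1", "P2"), exch_k = c(0.4, 1.2))
apply_rules(toy, c("usiteid", "k_ext"), c("profile_id", "exch_k"),
            list("paste(x)", "as.numeric(x)*391"))
```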
#### 5\.3\.0\.40 Ecoforest map of Quebec, Canada
* Duchesne, L., Ouimet, R., (2021\). Digital mapping of soil texture in ecoforest polygons in Quebec, Canada. PeerJ 9:e11685 [https://doi.org/10\.7717/peerj.11685](https://doi.org/10.7717/peerj.11685). Data download URL: [https://doi.org/10\.7717/peerj.11685/supp\-1](https://doi.org/10.7717/peerj.11685/supp-1)
```
if({
que.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/Quebec/RawData.csv")
#summary(as.factor(que.xy$Horizon))
## horizon depths were not measured - we assume 15-30 and 30-80
que.xy$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 15, 30)
que.xy$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 30, 80)
que.xy$site_key = que.xy$usiteid
que.xy$latitude_decimal_degrees = que.xy$Latitude
que.xy$longitude_decimal_degrees = que.xy$Longitude
que.xy$hzn_desgn = que.xy$Horizon
que.xy$sand_tot_psa = que.xy$PC_Sand
que.xy$silt_tot_psa = que.xy$PC_Silt
que.xy$clay_tot_psa = que.xy$PC_Clay
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(que.xy))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ que.xy[,i] = NA } }
chemsprops.QuebecTEX = que.xy[,col.names]
chemsprops.QuebecTEX$source_db = "QuebecTEX"
chemsprops.QuebecTEX$confidence_degree = 4
chemsprops.QuebecTEX$project_url = ""
chemsprops.QuebecTEX$citation_url = "https://doi.org/10.7717/peerj.11685"
chemsprops.QuebecTEX = complete.vars(chemsprops.QuebecTEX, sel = [c](https://rdrr.io/r/base/c.html)("clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.QuebecTEX)
#> [1] 26648 36
```
#### 5\.3\.0\.41 Pseudo\-observations
* Pseudo\-observations using simulated points (world deserts)
```
if({
## 0 soil organic carbon + 98% sand content (deserts)
[load](https://rdrr.io/r/base/load.html)("deserts.pnt.rda")
nut.sim <- [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(deserts.pnt, CRS("+proj=longlat +datum=WGS84")))
nut.sim[,1] <- NULL
nut.sim <- plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(nut.sim, [c](https://rdrr.io/r/base/c.html)("x"="longitude_decimal_degrees", "y"="latitude_decimal_degrees"))
nr = [nrow](https://rdrr.io/r/base/nrow.html)(nut.sim)
nut.sim$site_key <- [paste](https://rdrr.io/r/base/paste.html)("Simulated", 1:nr, sep="_")
## insert zeros for all nutrients except for the ones we are not sure about:
## http://www.decodedscience.org/chemistry-sahara-sand-elements-dunes/45828
sim.vars = [c](https://rdrr.io/r/base/c.html)("oc", "oc_d", "c_tot", "n_tot", "ecec", "clay_tot_psa", "mg_ext", "k_ext")
nut.sim[,sim.vars] <- 0
nut.sim$silt_tot_psa = 2
nut.sim$sand_tot_psa = 98
nut.sim$hzn_top = 0
nut.sim$hzn_bot = 30
nut.sim$db_od = 1.55
nut.sim2 = nut.sim
nut.sim2$silt_tot_psa = 1
nut.sim2$sand_tot_psa = 99
nut.sim2$hzn_top = 30
nut.sim2$hzn_bot = 60
nut.sim2$db_od = 1.6
nut.simA = [rbind](https://rdrr.io/r/base/cbind.html)(nut.sim, nut.sim2)
#str(nut.simA)
nut.simA$source_db = "Simulated"
nut.simA$confidence_degree = 10
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(nut.simA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ nut.simA[,i] = NA } }
chemsprops.SIM = nut.simA[,col.names]
chemsprops.SIM$project_url = "https://gitlab.com/openlandmap/"
chemsprops.SIM$citation_url = "https://gitlab.com/openlandmap/compiled-ess-point-data-sets/"
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SIM)
#> [1] 718 36
```
Other potential large soil profile DBs of interest:
* Shangguan, W., Dai, Y., Liu, B., Zhu, A., Duan, Q., Wu, L., … \& Chen, D. (2013\). [A China data set of soil properties for land surface modeling](https://doi.org/10.1002/jame.20026). Journal of Advances in Modeling Earth Systems, 5(2\), 212\-224\.
* Salković, E., Djurović, I., Knežević, M., Popović\-Bugarin, V., \& Topalović, A. (2018\). Digitization and mapping of national legacy soil data of Montenegro. Soil and Water Research, 13(2\), 83\-89\. [https://doi.org/10\.17221/81/2017\-SWR](https://doi.org/10.17221/81/2017-SWR)
#### 5\.3\.0\.1 National Cooperative Soil Survey Characterization Database
* National Cooperative Soil Survey, (2020\). National Cooperative Soil Survey Characterization Database. Data download URL: <http://ncsslabdatamart.sc.egov.usda.gov/>
* O’Geen, A., Walkinshaw, M., \& Beaudette, D. (2017\). SoilWeb: A multifaceted interface to soil survey information. Soil Science Society of America Journal, 81(4\), 853\-862\. [https://doi.org/10\.2136/sssaj2016\.11\.0386n](https://doi.org/10.2136/sssaj2016.11.0386n)
This data set is continuously updated.
```
if({
ncss.site <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Site_Location.csv", stringsAsFactors = FALSE)
ncss.layer <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Layer.csv", stringsAsFactors = FALSE)
ncss.bdm <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Bulk_Density_and_Moisture.csv", stringsAsFactors = FALSE)
## multiple measurements
summary(as.factor(ncss.bdm$prep_code))
ncss.bdm.0 <- ncss.bdm[ncss.bdm$prep_code=="S",]
summary(ncss.bdm.0$db_od)
## 0 BD values --- error!
ncss.carb <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Carbon_and_Extractions.csv", stringsAsFactors = FALSE)
ncss.organic <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Organic.csv", stringsAsFactors = FALSE)
ncss.pH <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_pH_and_Carbonates.csv", stringsAsFactors = FALSE)
#str(ncss.pH)
#summary(ncss.pH$ph_h2o)
#summary(!is.na(ncss.pH$ph_h2o))
ncss.PSDA <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_PSDA_and_Rock_Fragments.csv", stringsAsFactors = FALSE)
ncss.CEC <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_CEC_and_Bases.csv")
ncss.salt <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Salt.csv")
ncss.horizons <- plyr::join_all(list(ncss.bdm.0, ncss.layer, ncss.carb, ncss.organic[,c("labsampnum", "result_source_key", "c_tot", "n_tot", "db_od", "oc")], ncss.pH, ncss.PSDA, ncss.CEC, ncss.salt), type = "full", by="labsampnum")
#head(ncss.horizons)
nrow(ncss.horizons)
ncss.horizons$oc_d = signif(ncss.horizons$oc / 100 * ncss.horizons$db_od * 1000 * (100 - ifelse(is.na(ncss.horizons$wpg2), 0, ncss.horizons$wpg2))/100, 3)
ncss.horizons$ca_ext = signif(ncss.horizons$ca_nh4 * 200, 4)
ncss.horizons$mg_ext = signif(ncss.horizons$mg_nh4 * 121, 3)
ncss.horizons$na_ext = signif(ncss.horizons$na_nh4 * 230, 3)
ncss.horizons$k_ext = signif(ncss.horizons$k_nh4 * 391, 3)
#summary(ncss.horizons$oc_d)
## Values <0!!
chemsprops.NCSS = plyr::join(ncss.site[,site.names], ncss.horizons[,hor.names], by="site_key")
chemsprops.NCSS$site_obsdate = format(as.Date(chemsprops.NCSS$site_obsdate, format="%m/%d/%Y"), "%Y-%m-%d")
chemsprops.NCSS$source_db = "USDA_NCSS"
#dim(chemsprops.NCSS)
chemsprops.NCSS$oc = chemsprops.NCSS$oc * 10
chemsprops.NCSS$n_tot = chemsprops.NCSS$n_tot * 10
#hist(log1p(chemsprops.NCSS$oc), breaks=45, col="gray")
chemsprops.NCSS$confidence_degree = 1
chemsprops.NCSS$project_url = "http://ncsslabdatamart.sc.egov.usda.gov/"
chemsprops.NCSS$citation_url = "https://doi.org/10.2136/sssaj2016.11.0386n"
chemsprops.NCSS = complete.vars(chemsprops.NCSS, sel=c("tex_psda","oc","clay_tot_psa","ecec","ph_h2o","ec_12pre","k_ext"))
#rm(ncss.horizons)
}
dim(chemsprops.NCSS)
#> [1] 136011 36
#summary(!is.na(chemsprops.NCSS$oc))
## texture classes need to be cleaned-up
summary(as.factor(chemsprops.NCSS$tex_psda))
#> c C cl CL
#> 2391 10908 19 10158 5
#> cos CoS COS cosl CoSL
#> 2424 6 4 4543 2
#> Fine Sandy Loam fs fsl FSL l
#> 1 2357 10701 16 17038
#> L lcos LCoS lfs ls
#> 2 2933 3 1805 3166
#> LS lvfs s S sc
#> 1 152 2958 1 601
#> scl SCL si sic SiC
#> 5456 5 854 7123 11
#> sicl SiCL sil SiL SIL
#> 13718 14 22230 4 28
#> sl SL vfs vfsl VFSL
#> 5547 13 64 2678 3
#> NA's
#> 6068
```
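As the summary above shows, `tex_psda` mixes abbreviations, full names and inconsistent capitalisation (`c`, `C`, `cl`, `CL`, `Fine Sandy Loam`, …). A minimal clean\-up sketch is shown below; the `tex.fix` lookup and the `tex_psda_f` column are illustrative additions and are not part of the original import:

```
## Illustrative only: harmonize mixed-case texture class codes to full class names
tex.fix = c("c"="clay", "cl"="clay loam", "cos"="coarse sand", "cosl"="coarse sandy loam",
  "fs"="fine sand", "fsl"="fine sandy loam", "l"="loam", "lcos"="loamy coarse sand",
  "lfs"="loamy fine sand", "ls"="loamy sand", "lvfs"="loamy very fine sand",
  "s"="sand", "sc"="sandy clay", "scl"="sandy clay loam", "si"="silt",
  "sic"="silty clay", "sicl"="silty clay loam", "sil"="silt loam", "sl"="sandy loam",
  "vfs"="very fine sand", "vfsl"="very fine sandy loam", "fine sandy loam"="fine sandy loam")
tex0 = trimws(tolower(chemsprops.NCSS$tex_psda))
chemsprops.NCSS$tex_psda_f = ifelse(tex0 %in% names(tex.fix), tex.fix[tex0], tex0)
#summary(as.factor(chemsprops.NCSS$tex_psda_f))
```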
#### 5\.3\.0\.2 Rapid Carbon Assessment (RaCA)
* Soil Survey Staff. Rapid Carbon Assessment (RaCA) project. United States Department of Agriculture, Natural Resources Conservation Service. Available online. June 1, 2013 (FY2013 official release). Data download URL: [https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid\=nrcs142p2\_054164](https://www.nrcs.usda.gov/wps/portal/nrcs/detailfull/soils/research/?cid=nrcs142p2_054164)
* **Note**: Locations of each site have been degraded due to confidentiality and only reflect the general position of each site.
* Wills, S. et al. (2013\) [“Rapid carbon assessment (RaCA) methodology: Sampling and Initial Summary. United States Department of Agriculture.”](https://www.nrcs.usda.gov/wps/PA_NRCSConsumption/download?cid=nrcs142p2_052841&ext=pdf) Natural Resources Conservation Service, National Soil Survey Center.
```
if({
raca.df <- read.csv("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCa_general_location.csv", stringsAsFactors = FALSE)
names(raca.df)[1] = "rcasiteid"
raca.layer <- read.csv("/mnt/diskstation/data/Soil_points/USA/RaCA/RaCA_samples_JULY2016.csv", stringsAsFactors = FALSE)
raca.layer$longitude_decimal_degrees = plyr::join(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_long
raca.layer$latitude_decimal_degrees = plyr::join(raca.layer["rcasiteid"], raca.df, match ="first")$Gen_lat
raca.layer$site_obsdate = "2013"
summary(raca.layer$Calc_SOC)
#plot(raca.layer[!duplicated(raca.layer$rcasiteid),c("longitude_decimal_degrees", "latitude_decimal_degrees")])
#summary(raca.layer$SOC_pred1)
## some strange groupings around small values
raca.layer$oc_d = signif(raca.layer$Calc_SOC / 100 * raca.layer$Bulkdensity * 1000 * (100 - ifelse(is.na(raca.layer$fragvolc), 0, raca.layer$fragvolc))/100, 3)
raca.layer$oc = raca.layer$Calc_SOC * 10
#summary(raca.layer$oc_d)
raca.h.lst <- c("rcasiteid", "lay_field_label1", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "Lab.Sample.No", "layer_Number", "TOP", "BOT", "hzname", "texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot_ncs", "n_tot_ncs", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "fragvolc", "Bulkdensity", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = raca.h.lst[which(!raca.h.lst %in% names(raca.layer))]
if(length(x.na)>0){ for(i in x.na){ raca.layer[,i] = NA } }
chemsprops.RaCA = raca.layer[,raca.h.lst]
chemsprops.RaCA$source_db = "RaCA2016"
chemsprops.RaCA$confidence_degree = 4
chemsprops.RaCA$project_url = "https://www.nrcs.usda.gov/survey/raca/"
chemsprops.RaCA$citation_url = "https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052841.pdf"
chemsprops.RaCA = complete.vars(chemsprops.RaCA, sel = c("oc", "fragvolc"))
}
#> Joining by: rcasiteid
#> Joining by: rcasiteid
dim(chemsprops.RaCA)
#> [1] 53664 36
```
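Both the NCSS and RaCA imports above derive organic carbon density (`oc_d`, in kg/m³) from the same expression: carbon concentration times bulk density, reduced by the volume fraction of coarse fragments. A small helper, given here only as a sketch of that repeated expression (the name `calc_oc_d` is ours, not part of the original scripts), assuming SOC in weight percent and bulk density in t/m³:

```
## Sketch: organic carbon density (kg/m3) from SOC (%), bulk density (t/m3)
## and volume percent of coarse fragments (gravel); mirrors the expression used above
calc_oc_d = function(oc_pct, bd, cf_pct = 0){
  cf_pct = ifelse(is.na(cf_pct), 0, cf_pct)
  signif(oc_pct / 100 * bd * 1000 * (100 - cf_pct) / 100, 3)
}
calc_oc_d(oc_pct = 2.5, bd = 1.3, cf_pct = 10)  ## ~29.2 kg/m3
```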
#### 5\.3\.0\.3 National Geochemical Database Soil
* Smith, D.B., Cannon, W.F., Woodruff, L.G., Solano, Federico, Kilburn, J.E., and Fey, D.L. (2013\). [Geochemical and mineralogical data for soils of the conterminous United States](http://pubs.usgs.gov/ds/801/). U.S. Geological Survey Data Series 801, 19 p., <http://pubs.usgs.gov/ds/801/>.
* Grossman, J. N. (2004\). [The National Geochemical Survey\-database and documentation](https://doi.org/10.3133/ofr20041001). U.S. Geological Survey Open\-File Report 2004\-1001\. [https://doi.org/10\.3133/ofr20041001](https://doi.org/10.3133/ofr20041001).
* **Note**: NGS focuses on stream\-sediment samples, but also contains many soil samples.
```
if({
ngs.points <- read.csv("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/site.txt", sep=",")
## 4857 pnts
ngs.layers <- lapply(c("top5cm.txt", "ahorizon.txt", "chorizon.txt"), function(i){read.csv(paste0("/mnt/diskstation/data/Soil_points/USA/geochemical/ds-801-csv/", i), sep=",")})
ngs.layers = plyr::rbind.fill(ngs.layers)
#dim(ngs.layers)
# 14571 126
#summary(ngs.layers$tot_carb_pct)
#lattice::xyplot(c_org_pct ~ c_tot_pct, ngs.layers, scales=list(x = list(log = 2), y = list(log = 2)))
#lattice::xyplot(c_org_pct ~ tot_clay_pct, ngs.layers, scales=list(y = list(log = 2)))
ngs.layers$c_tot = ngs.layers$c_tot_pct * 10
ngs.layers$oc = ngs.layers$c_org_pct * 10
ngs.layers$hzn_top = sapply(ngs.layers$depth_cm, function(i){strsplit(i, "-")[[1]][1]})
ngs.layers$hzn_bot = sapply(ngs.layers$depth_cm, function(i){strsplit(i, "-")[[1]][2]})
#summary(ngs.layers$tot_clay_pct)
#summary(ngs.layers$k_pct) ## very high numbers?
## question is if the geochemical element results are compatible with e.g. k_ext?
t.ngs = c("lab_id", "site_id", "horizon", "hzn_top", "hzn_bot", "tot_clay_pct", "c_tot", "oc")
ngs.m = plyr::join(ngs.points, ngs.layers[,t.ngs])
ngs.m$site_obsdate = as.Date(ngs.m$colldate, format="%Y-%m-%d")
ngs.h.lst <- c("site_id", "quad", "site_obsdate", "longitude", "latitude", "lab_id", "layer_sequence", "hzn_top", "hzn_bot", "horizon", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ngs.h.lst[which(!ngs.h.lst %in% names(ngs.m))]
if(length(x.na)>0){ for(i in x.na){ ngs.m[,i] = NA } }
chemsprops.USGS.NGS = ngs.m[,ngs.h.lst]
chemsprops.USGS.NGS$source_db = "USGS.NGS"
chemsprops.USGS.NGS$confidence_degree = 1
chemsprops.USGS.NGS$project_url = "https://mrdata.usgs.gov/ds-801/"
chemsprops.USGS.NGS$citation_url = "https://pubs.usgs.gov/ds/801/"
chemsprops.USGS.NGS = complete.vars(chemsprops.USGS.NGS, sel = c("tot_clay_pct", "oc"), coords = c("longitude", "latitude"))
}
dim(chemsprops.USGS.NGS)
#> [1] 9446 36
```
#### 5\.3\.0\.4 Forest Inventory and Analysis Database (FIADB)
* Domke, G. M., Perry, C. H., Walters, B. F., Nave, L. E., Woodall, C. W., \& Swanston, C. W. (2017\). [Toward inventory‐based estimates of soil organic carbon in forests of the United States](https://doi.org/10.1002/eap.1516). Ecological Applications, 27(4\), 1223\-1235\. [https://doi.org/10\.1002/eap.1516](https://doi.org/10.1002/eap.1516)
* Forest Inventory and Analysis (2014\). The Forest Inventory and Analysis Database: Database description
and user guide version 6\.0\.1 for Phase 3\. U.S. Department of Agriculture, Forest Service. 182 p.
\[Online]. Available: [https://www.fia.fs.fed.us/library/database\-documentation/](https://www.fia.fs.fed.us/library/database-documentation/)
* **Note**: samples are taken only from the top\-soil either 0–10\.16 cm or 10\.16–20\.32 cm.
```
if({
fia.loc <- vroom::vroom("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/PLOT.csv")
fia.loc$site_id = paste(fia.loc$STATECD, fia.loc$COUNTYCD, fia.loc$PLOT, sep="_")
fia.lab <- read.csv("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_LAB.csv")
fia.lab$site_id = paste(fia.lab$STATECD, fia.lab$COUNTYCD, fia.lab$PLOT, sep="_")
## 23,765 rows
fia.des <- read.csv("/mnt/diskstation/data/Soil_points/USA/FIADB/ENTIRE/SOILS_SAMPLE_LOC.csv")
fia.des$site_id = paste(fia.des$STATECD, fia.des$COUNTYCD, fia.des$PLOT, sep="_")
#fia.lab$TXTRLYR1 = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","TXTRLYR1","INVYR")], match ="first")$TXTRLYR1
fia.lab$TXTRLYR2 = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","TXTRLYR2","INVYR")], match ="first")$TXTRLYR2
#summary(as.factor(fia.lab$TXTRLYR1))
fia.lab$tex_psda = factor(fia.lab$TXTRLYR2, labels = c("Organic", "Loamy", "Clayey", "Sandy", "Coarse sand", "Not measured"))
#Code Description
# 0 Organic.
# 1 Loamy.
# 2 Clayey.
# 3 Sandy.
# 4 Coarse sand.
# 9 Not measured - make plot notes
fia.lab$FORFLTHK = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","FORFLTHK","INVYR")], match ="first")$FORFLTHK
#summary(fia.lab$FORFLTHK)
fia.lab$LTRLRTHK = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","LTRLRTHK","INVYR")], match ="first")$LTRLRTHK
fia.lab$tot_thk = rowSums(fia.lab[,c("FORFLTHK", "LTRLRTHK")], na.rm=TRUE)
fia.lab$DPTHSBSL = plyr::join(fia.lab[c("site_id","INVYR")], fia.des[c("site_id","DPTHSBSL","INVYR")], match ="first")$DPTHSBSL
#summary(fia.lab$DPTHSBSL)
sel.fia = fia.loc$site_id %in% fia.lab$site_id
#summary(sel.fia)
# 15,109
fia.loc = fia.loc[sel.fia, c("site_id", "LON", "LAT")]
#summary(fia.lab$BULK_DENSITY) ## some strange values for BD!
#quantile(fia.lab$BULK_DENSITY, c(0.02, 0.98), na.rm=TRUE)
#summary(fia.lab$C_ORG_PCT)
#summary(as.factor(fia.lab$LAYER_TYPE))
#lattice::xyplot(BULK_DENSITY ~ C_ORG_PCT, fia.lab, scales=list(x = list(log = 2)))
#dim(fia.lab)
# 14571 126
fia.lab$oc = fia.lab$C_ORG_PCT * 10
fia.lab$c_tot = fia.lab$C_TOTAL_PCT * 10
fia.lab$n_tot = fia.lab$N_TOTAL_PCT * 10
fia.lab$db_od = ifelse(fia.lab$BULK_DENSITY < 0.001 | fia.lab$BULK_DENSITY > 1.8, NA, fia.lab$BULK_DENSITY)
#lattice::xyplot(db_od ~ C_ORG_PCT, fia.lab, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(x=list(log=TRUE, equispaced.log=FALSE)), ylab="Bulk density", xlab="SOC wpct")
#hist(fia.lab$db_od, breaks=45)
## A lot of very small BD measurements
fia.lab$oc_d = signif(fia.lab$oc / 100 * fia.lab$db_od * 1000 * (100 - ifelse(is.na(fia.lab$COARSE_FRACTION_PCT), 0, fia.lab$COARSE_FRACTION_PCT))/100, 3)
#hist(fia.lab$oc_d, breaks=45, col="grey")
fia.lab$hzn_top = ifelse(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", 0, NA)
fia.lab$hzn_bot = ifelse(fia.lab$LAYER_TYPE=="FF_TOTAL" | fia.lab$LAYER_TYPE=="L_ORG", fia.lab$tot_thk, NA)
fia.lab$hzn_top = ifelse(is.na(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = ifelse(is.na(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_2" | fia.lab$LAYER_TYPE=="ORG_2"), 20.3 + fia.lab$tot_thk, fia.lab$hzn_bot)
fia.lab$hzn_top = ifelse(is.na(fia.lab$hzn_top) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 0 + fia.lab$tot_thk, fia.lab$hzn_top)
fia.lab$hzn_bot = ifelse(is.na(fia.lab$hzn_bot) & (fia.lab$LAYER_TYPE=="MIN_1" | fia.lab$LAYER_TYPE=="ORG_1"), 10.2 + fia.lab$tot_thk, fia.lab$hzn_bot)
#summary(fia.lab$EXCHNG_K) ## Negative values!
fia.m = plyr::join(fia.lab, fia.loc)
#fia.m = fia.m[!duplicated(as.factor(paste(fia.m$site_id, fia.m$INVYR, fia.m$LAYER_TYPE, sep="_"))),]
fia.m$site_obsdate = as.Date(fia.m$SAMPLE_DATE, format="%Y-%m-%d")
sel.d.fia = fia.m$site_obsdate < as.Date("1980-01-01", format="%Y-%m-%d")
fia.m$site_obsdate[which(sel.d.fia)] = NA
#hist(fia.m$site_obsdate, breaks=25)
fia.h.lst <- c("site_id", "usiteid", "site_obsdate", "LON", "LAT", "SAMPLE_ID", "layer_sequence", "hzn_top", "hzn_bot", "LAYER_TYPE", "tex_psda", "tot_clay_pct", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "PH_H2O", "PH_CACL2", "cec_sum", "cec_nh4", "ECEC", "COARSE_FRACTION_PCT", "db_od", "EXCHNG_CA", "EXCHNG_MG", "EXCHNG_NA", "EXCHNG_K", "ec_satp", "ec_12pre")
x.na = fia.h.lst[which(!fia.h.lst %in% names(fia.m))]
if(length(x.na)>0){ for(i in x.na){ fia.m[,i] = NA } }
chemsprops.FIADB = fia.m[,fia.h.lst]
chemsprops.FIADB$source_db = "FIADB"
chemsprops.FIADB$confidence_degree = 2
chemsprops.FIADB$project_url = "http://www.fia.fs.fed.us/"
chemsprops.FIADB$citation_url = "https://www.fia.fs.fed.us/library/database-documentation/"
chemsprops.FIADB = complete.vars(chemsprops.FIADB, sel = c("PH_H2O", "oc", "EXCHNG_K"), coords = c("LON", "LAT"))
#str(unique(paste(chemsprops.FIADB$LON, chemsprops.FIADB$LAT, sep="_")))
}
dim(chemsprops.FIADB)
#> [1] 23208 36
#write.csv(chemsprops.FIADB, "/mnt/diskstation/data/Soil_points/USA/FIADB/fiadb_soil.pnts.csv")
```
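The fixed offsets of 10.2 and 20.3 cm used for `hzn_top` and `hzn_bot` above appear to encode the FIA sampling depths of 0–10\.16 cm (4 in) and 10\.16–20\.32 cm (8 in) noted in the citation, rounded to one decimal and shifted downward by the forest\-floor plus litter thickness (`tot_thk`). A quick check of where the numbers come from:

```
## FIA mineral/organic layers 1 and 2 are sampled at 0-4 in and 4-8 in; in cm:
round(c(4, 8) * 2.54, 1)
#> [1] 10.2 20.3
```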
#### 5\.3\.0\.5 Africa soil profiles database
* Leenaars, J. G., Van Oostrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\. Data download URL: <https://data.isric.org/>
```
if({
library(foreign)
afspdb.profiles <- read.dbf("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
afspdb.layers <- read.dbf("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
afspdb.s.lst <- c("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD")
#summary(afspdb.layers$BlkDens)
## recode negative values (missing-value flags) to NA
for(j in 1:ncol(afspdb.layers)){
if(is.numeric(afspdb.layers[,j])) {
afspdb.layers[,j] <- ifelse(afspdb.layers[,j] < 0, NA, afspdb.layers[,j])
}
}
afspdb.layers$ca_ext = afspdb.layers$ExCa * 200
afspdb.layers$mg_ext = afspdb.layers$ExMg * 121
afspdb.layers$na_ext = afspdb.layers$ExNa * 230
afspdb.layers$k_ext = afspdb.layers$ExK * 391
#summary(afspdb.layers$k_ext)
afspdb.m = plyr::join(afspdb.profiles[,afspdb.s.lst], afspdb.layers)
afspdb.m$oc_d = signif(afspdb.m$OrgC * afspdb.m$BlkDens * (100 - ifelse(is.na(afspdb.m$CfPc), 0, afspdb.m$CfPc))/100, 3)
#summary(afspdb.m$T_Year)
afspdb.m$T_Year = ifelse(afspdb.m$T_Year < 0, NA, afspdb.m$T_Year)
afspdb.h.lst <- c("ProfileID", "FldMnl_ID", "T_Year", "X_LonDD", "Y_LatDD", "LayerID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "oc_d", "TotC", "TotalN", "PHKCl", "PHH2O", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "mg_ext", "na_ext", "k_ext", "EC", "ec_12pre")
x.na = afspdb.h.lst[which(!afspdb.h.lst %in% names(afspdb.m))]
if(length(x.na)>0){ for(i in x.na){ afspdb.m[,i] = NA } }
chemsprops.AfSPDB = afspdb.m[,afspdb.h.lst]
chemsprops.AfSPDB$source_db = "AfSPDB"
chemsprops.AfSPDB$confidence_degree = 5
chemsprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
chemsprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
chemsprops.AfSPDB = complete.vars(chemsprops.AfSPDB, sel = c("LabTxtr","OrgC","Clay","Ecec","PHH2O","EC","k_ext"), coords = c("X_LonDD", "Y_LatDD"))
}
dim(chemsprops.AfSPDB)
#> [1] 60306 36
```
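The multipliers 200, 121, 230 and 391 applied to `ExCa`, `ExMg`, `ExNa` and `ExK` above (and to the equivalent columns in the NCSS, WISE and EGRPR imports) convert exchangeable cations from cmol(+)/kg to mg/kg: atomic mass divided by ionic charge, times 10 (the scripts round the Mg factor down to 121). A short check, shown only as a sketch:

```
## mg/kg per cmol(+)/kg = atomic mass / charge * 10
round(c(Ca = 40.08/2, Mg = 24.31/2, Na = 22.99/1, K = 39.10/1) * 10)
#>  Ca  Mg  Na   K
#> 200 122 230 391
```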
#### 5\.3\.0\.6 Africa Soil Information Service (AfSIS) Soil Chemistry
* Towett, E. K., Shepherd, K. D., Tondoh, J. E., Winowiecki, L. A., Lulseged, T., Nyambura, M., … \& Cadisch, G. (2015\). Total elemental composition of soils in Sub\-Saharan Africa and relationship with soil forming factors. Geoderma Regional, 5, 157\-168\. [https://doi.org/10\.1016/j.geodrs.2015\.06\.002](https://doi.org/10.1016/j.geodrs.2015.06.002)
* [AfSIS Soil Chemistry](https://github.com/qedsoftware/afsis-soil-chem-tutorial) produced by World Agroforestry Centre (ICRAF), Quantitative Engineering Design (QED), Center for International Earth Science Information Network (CIESIN), The International Center for Tropical Agriculture (CIAT), Crop Nutrition Laboratory Services (CROPNUTS) and Rothamsted Research (RRES). Data download URL: <https://registry.opendata.aws/afsis/>
```
if({
afsis1.xy = read.csv("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Georeferences/georeferences.csv")
afsis1.xy$Sampling.date = 2011
afsis1.lst = list.files("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/2009-2013/Wet_Chemistry", pattern=glob2rx("*.csv$"), full.names = TRUE, recursive = TRUE)
afsis1.hor = plyr::rbind.fill(lapply(afsis1.lst, read.csv))
tansis.xy = read.csv("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Georeferences/georeferences.csv")
#summary(tansis.xy$Sampling.date)
tansis.xy$Sampling.date = 2018
tansis.lst = list.files("/mnt/diskstation/data/Soil_points/AF/AfSIS_SSL/tansis/Wet_Chemistry", pattern=glob2rx("*.csv$"), full.names = TRUE, recursive = TRUE)
tansis.hor = plyr::rbind.fill(lapply(tansis.lst, read.csv))
afsis1t.df = plyr::rbind.fill(list(plyr::join(afsis1.hor, afsis1.xy, by="SSN"), plyr::join(tansis.hor, tansis.xy, by="SSN")))
afsis1t.df$UpDpth = ifelse(afsis1t.df$Depth=="sub", 20, 0)
afsis1t.df$LowDpth = ifelse(afsis1t.df$Depth=="sub", 50, 20)
afsis1t.df$LayerNr = ifelse(afsis1t.df$Depth=="sub", 2, 1)
#summary(afsis1t.df$C...Org)
afsis1t.df$oc = rowMeans(afsis1t.df[,c("C...Org", "X.C")], na.rm=TRUE) * 10
afsis1t.df$c_tot = afsis1t.df$Total.carbon
afsis1t.df$n_tot = rowMeans(afsis1t.df[,c("Total.nitrogen", "X.N")], na.rm=TRUE) * 10
afsis1t.df$ph_h2o = rowMeans(afsis1t.df[,c("PH", "pH")], na.rm=TRUE)
## multiple texture fractions - which one is the total clay, sand, silt?
## Clay content for water dispersed particles-recorded after 4 minutes of ultrasonication
#summary(afsis1t.df$Psa.w4clay)
#plot(afsis1t.df[,c("Longitude", "Latitude")])
afsis1.h.lst <- c("SSN", "Site", "Sampling.date", "Longitude", "Latitude", "Soil.material", "LayerNr", "UpDpth", "LowDpth", "HorDes", "LabTxtr", "Psa.w4clay", "Psa.w4silt", "Psa.w4sand", "oc", "oc_d", "c_tot", "n_tot", "PHKCl", "ph_h2o", "PHCaCl2", "CecSoil", "cec_nh4", "Ecec", "CfPc" , "BlkDens", "ca_ext", "M3.Mg", "M3.Na", "M3.K", "EC", "ec_12pre")
x.na = afsis1.h.lst[which(!afsis1.h.lst %in% names(afsis1t.df))]
if(length(x.na)>0){ for(i in x.na){ afsis1t.df[,i] = NA } }
chemsprops.AfSIS1 = afsis1t.df[,afsis1.h.lst]
chemsprops.AfSIS1$source_db = "AfSIS1"
chemsprops.AfSIS1$confidence_degree = 2
chemsprops.AfSIS1$project_url = "https://registry.opendata.aws/afsis/"
chemsprops.AfSIS1$citation_url = "https://doi.org/10.1016/j.geodrs.2015.06.002"
chemsprops.AfSIS1 = complete.vars(chemsprops.AfSIS1, sel = c("Psa.w4clay","oc","ph_h2o","M3.K"), coords = c("Longitude", "Latitude"))
}
dim(chemsprops.AfSIS1)
#> [1] 4162 36
```
#### 5\.3\.0\.7 Fine Root Ecology Database (FRED)
* Iversen CM, McCormack ML, Baer JK, Powell AS, Chen W, Collins C, Fan Y, Fanin N, Freschet GT, Guo D, Hogan JA, Kou L, Laughlin DC, Lavely E, Liese R, Lin D, Meier IC, Montagnoli A, Roumet C, See CR, Soper F, Terzaghi M, Valverde\-Barrantes OJ, Wang C, Wright SJ, Wurzburger N, Zadworny M. (2021\). [Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 3](https://roots.ornl.gov/). Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.014/1459186](https://doi.org/10.25581/ornlsfa.014/1459186).
```
if({
Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 2)
fred = vroom::vroom("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", skip = 10, col_names=FALSE)
## 57,190 x 1,164
#nm.fred = read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Column_Definitions_20210423-091040.csv", header=TRUE)
nm.fred0 = read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED3_Entire_Database_2021.csv", nrows=2)
names(fred) = make.names(t(nm.fred0)[,1])
## 1164 columns!
fred.h.lst = c("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "oc_d", "c_tot", "Soil.N.content", "ph_kcl", "Soil.pH_Water", "Soil.pH_Salt", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "Soil.effective.cation.exchange.capacity..ECEC.", "wpg2", "Soil.bulk.density", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
fred$site_obsdate = as.integer(rowMeans(fred[,c("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE))
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = ifelse(is.na(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = ifelse(is.na(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#names(fred)[grep("Notes_Row", names(fred))]
#summary(fred[,grep("clay", names(fred))])
#summary(fred[,grep("cation.exchange", names(fred))])
#summary(fred[,grep("organic.C", names(fred))])
#summary(fred$Soil.organic.C.content)
#summary(fred$Soil.bulk.density)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_bot = ifelse(is.na(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Lower.sampling.depth)
fred$hzn_top = ifelse(is.na(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Upper.sampling.depth)
fred$oc_d = signif(fred$Soil.organic.C.content / 1000 * fred$Soil.bulk.density * 1000, 3)
#summary(fred$oc_d)
x.na = fred.h.lst[which(!fred.h.lst %in% names(fred))]
if(length(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
chemsprops.FRED = fred[,fred.h.lst]
#plot(chemsprops.FRED[,4:5])
chemsprops.FRED$source_db = "FRED"
chemsprops.FRED$confidence_degree = 5
chemsprops.FRED$project_url = "https://roots.ornl.gov/"
chemsprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.014/1459186"
chemsprops.FRED = complete.vars(chemsprops.FRED, sel = c("Soil.organic.C.content", "Soil.texture_Fraction.clay", "Soil.pH_Water"))
## many duplicates
}
dim(chemsprops.FRED)
#> [1] 858 36
```
#### 5\.3\.0\.8 Global root traits (GRooT) database (compilation)
* Guerrero‐Ramírez, N. R., Mommer, L., Freschet, G. T., Iversen, C. M., McCormack, M. L., Kattge, J., … \& Weigelt, A. (2021\). [Global root traits (GRooT) database](https://dx.doi.org/10.1111/geb.13179). Global ecology and biogeography, 30(1\), 25\-37\. [https://dx.doi.org/10\.1111/geb.13179](https://dx.doi.org/10.1111/geb.13179)
```
if({
#Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 2)
GROOT = vroom::vroom("/mnt/diskstation/data/Soil_points/INT/GRooT/GRooTFullVersion.csv")
## 114,222 x 73
c("locationID", "GRooTID", "originalID", "source", "year", "decimalLatitude", "decimalLongitud", "soilpH", "soilTexture", "soilCarbon", "soilNitrogen", "soilPhosphorus", "soilCarbonToNitrogen", "soilBaseCationSaturation", "soilCationExchangeCapacity", "soilOrganicMatter", "soilWaterGravimetric", "soilWaterVolumetric")
#summary(GROOT$soilCarbon)
#summary(!is.na(GROOT$soilCarbon))
#summary(GROOT$soilOrganicMatter)
#summary(GROOT$soilNitrogen)
#summary(GROOT$soilpH)
#summary(as.factor(GROOT$soilTexture))
#lattice::xyplot(soilCarbon ~ soilpH, GROOT, par.settings = list(plot.symbol = list(col=scales::alpha("black", 0.6), fill=scales::alpha("red", 0.6), pch=21, cex=0.6)), scales = list(y=list(log=TRUE, equispaced.log=FALSE)), ylab="SOC", xlab="pH")
GROOT$site_obsdate = as.Date(paste0(GROOT$year, "-01-01"), format="%Y-%m-%d")
GROOT$hzn_top = 0
GROOT$hzn_bot = 30
GROOT.h.lst = c("locationID", "originalID", "site_obsdate", "decimalLongitud", "decimalLatitude", "GRooTID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "soilTexture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "soilCarbon", "oc_d", "c_tot", "soilNitrogen", "ph_kcl", "soilpH", "ph_cacl2", "soilCationExchangeCapacity", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = GROOT.h.lst[which(!GROOT.h.lst %in% names(GROOT))]
if(length(x.na)>0){ for(i in x.na){ GROOT[,i] = NA } }
chemsprops.GROOT = GROOT[,GROOT.h.lst]
chemsprops.GROOT$source_db = "GROOT"
chemsprops.GROOT$confidence_degree = 8
chemsprops.GROOT$project_url = "https://groot-database.github.io/GRooT/"
chemsprops.GROOT$citation_url = "https://dx.doi.org/10.1111/geb.13179"
chemsprops.GROOT = complete.vars(chemsprops.GROOT, sel = c("soilCarbon", "soilpH"), coords = c("decimalLongitud", "decimalLatitude"))
}
dim(chemsprops.GROOT)
#> [1] 718 36
```
#### 5\.3\.0\.9 Global Soil Respiration DB
* Bond\-Lamberty, B. and Thomson, A. (2010\). A global database of soil respiration data, Biogeosciences, 7, 1915–1926, [https://doi.org/10\.5194/bg\-7\-1915\-2010](https://doi.org/10.5194/bg-7-1915-2010)
```
if({
srdb = read.csv("/mnt/diskstation/data/Soil_points/INT/SRDB/srdb-data.csv")
## 10366 x 85
srdb.h.lst = c("Site_ID", "Notes", "Study_midyear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psd", "Soil_clay", "Soil_silt", "Soil_sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Soil_BD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
#summary(srdb$Study_midyear)
srdb$hzn_bot = ifelse(is.na(srdb$C_soildepth), 100, srdb$C_soildepth)
srdb$hzn_top = 0
#summary(srdb$Soil_clay)
#summary(srdb$C_soilmineral)
srdb$oc_d = signif(srdb$C_soilmineral / 1000 / (srdb$hzn_bot/100), 3)
#summary(srdb$oc_d)
#summary(srdb$Soil_BD)
srdb$oc = srdb$oc_d / srdb$Soil_BD
#summary(srdb$oc)
x.na = srdb.h.lst[which(!srdb.h.lst %in% names(srdb))]
if(length(x.na)>0){ for(i in x.na){ srdb[,i] = NA } }
chemsprops.SRDB = srdb[,srdb.h.lst]
#plot(chemsprops.SRDB[,4:5])
chemsprops.SRDB$source_db = "SRDB"
chemsprops.SRDB$confidence_degree = 5
chemsprops.SRDB$project_url = "https://github.com/bpbond/srdb/"
chemsprops.SRDB$citation_url = "https://doi.org/10.5194/bg-7-1915-2010"
chemsprops.SRDB = complete.vars(chemsprops.SRDB, sel = c("oc", "Soil_clay", "Soil_BD"), coords = c("Longitude", "Latitude"))
}
dim(chemsprops.SRDB)
#> [1] 1596 36
```
#### 5\.3\.0\.10 SOils DAta Harmonization database (SoDaH)
* Wieder, W. R., Pierson, D., Earl, S., Lajtha, K., Baer, S., Ballantyne, F., … \& Weintraub, S. (2020\). [SoDaH: the SOils DAta Harmonization database, an open\-source synthesis of soil data from research networks, version 1\.0](https://doi.org/10.5194/essd-2020-195). Earth System Science Data Discussions, 1\-19\. [https://doi.org/10\.5194/essd\-2020\-195](https://doi.org/10.5194/essd-2020-195). Data download URL: [https://doi.org/10\.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4](https://doi.org/10.6073/pasta/9733f6b6d2ffd12bf126dc36a763e0b4)
```
if({
sodah.hor = read.csv("/mnt/diskstation/data/Soil_points/INT/SoDaH/521_soils_data_harmonization_6e8416fa0c9a2c2872f21ba208e6a919.csv")
#head(sodah.hor)
#summary(sodah.hor$coarse_frac)
#summary(sodah.hor$lyr_soc)
#summary(sodah.hor$lyr_som_WalkleyBlack/1.724)
#summary(as.factor(sodah.hor$observation_date))
sodah.hor$site_obsdate = as.integer(substr(sodah.hor$observation_date, 1, 4))
sodah.hor$oc = ifelse(is.na(sodah.hor$lyr_soc), sodah.hor$lyr_som_WalkleyBlack/1.724, sodah.hor$lyr_soc) * 10
sodah.hor$n_tot = sodah.hor$lyr_n_tot * 10
sodah.hor$oc_d = signif(sodah.hor$oc / 1000 * sodah.hor$bd_samp * 1000 * (100 - ifelse(is.na(sodah.hor$coarse_frac), 0, sodah.hor$coarse_frac))/100, 3)
sodah.hor$site_key = paste(sodah.hor$network, sodah.hor$location_name, sep="_")
sodah.hor$labsampnum = make.unique(paste(sodah.hor$network, sodah.hor$location_name, sodah.hor$L1, sep="_"))
#summary(sodah.hor$oc_d)
sodah.h.lst = c("site_key", "data_file", "observation_date", "long", "lat", "labsampnum", "layer_sequence", "layer_top", "layer_bot", "hzn", "profile_texture_class", "clay", "silt", "sand", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl", "cec_sum", "cec_nh4", "ecec", "coarse_frac", "bd_samp", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre", "source_db", "confidence_degree")
x.na = sodah.h.lst[which(!sodah.h.lst %in% names(sodah.hor))]
if(length(x.na)>0){ for(i in x.na){ sodah.hor[,i] = NA } }
chemsprops.SoDaH = sodah.hor[,sodah.h.lst]
#plot(chemsprops.SoDaH[,4:5])
chemsprops.SoDaH$source_db = "SoDaH"
chemsprops.SoDaH$confidence_degree = 3
chemsprops.SoDaH$project_url = "https://lter.github.io/som-website"
chemsprops.SoDaH$citation_url = "https://doi.org/10.5194/essd-2020-195"
chemsprops.SoDaH = complete.vars(chemsprops.SoDaH, sel = c("oc", "clay", "ph_h2o"), coords = c("long", "lat"))
}
dim(chemsprops.SoDaH)
#> [1] 20383 36
```
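Where only Walkley\-Black soil organic matter is reported, the SoDaH import above falls back on the conventional Van Bemmelen factor of 1\.724 (organic matter ≈ 1\.724 × organic carbon). A minimal illustration, assuming organic matter given in percent:

```
## Van Bemmelen conversion: organic matter (%) to organic carbon (%)
om_to_oc = function(om){ om / 1.724 }
om_to_oc(3.45)  ## approx. 2% organic carbon
```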
#### 5\.3\.0\.11 ISRIC WISE harmonized soil profile data
* Batjes, N.H. (2009\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management 25:124–127\. Data download URL: [https://files.isric.org/public/wise/WD\-WISE.zip](https://files.isric.org/public/wise/WD-WISE.zip)
```
if({
wise.site <- read.table("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.s.lst <- c("WISE3_id", "PITREF", "DATEYR", "LONDD", "LATDD")
wise.site$LONDD = as.numeric(wise.site$LONDD)
wise.site$LATDD = as.numeric(wise.site$LATDD)
wise.layer <- read.table("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv", sep=",", header=TRUE, stringsAsFactors = FALSE, fill=TRUE)
wise.layer$ca_ext = signif(wise.layer$EXCA * 200, 4)
wise.layer$mg_ext = signif(wise.layer$EXMG * 121, 3)
wise.layer$na_ext = signif(wise.layer$EXNA * 230, 3)
wise.layer$k_ext = signif(wise.layer$EXK * 391, 3)
wise.layer$oc_d = signif(wise.layer$ORGC / 1000 * wise.layer$BULKDENS * 1000 * (100 - ifelse(is.na(wise.layer$GRAVEL), 0, wise.layer$GRAVEL))/100, 3)
wise.h.lst <- c("WISE3_ID", "labsampnum", "HONU", "TOPDEP", "BOTDEP", "DESIG", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "oc_d", "c_tot", "TOTN", "PHKCL", "PHH2O", "PHCACL2", "CECSOIL", "cec_nh4", "ecec", "GRAVEL" , "BULKDENS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ECE", "ec_12pre")
x.na = wise.h.lst[which(!wise.h.lst %in% names(wise.layer))]
if(length(x.na)>0){ for(i in x.na){ wise.layer[,i] = NA } }
chemsprops.WISE = merge(wise.site[,wise.s.lst], wise.layer[,wise.h.lst], by.x="WISE3_id", by.y="WISE3_ID")
chemsprops.WISE$source_db = "ISRIC_WISE"
chemsprops.WISE$confidence_degree = 4
chemsprops.WISE$project_url = "https://isric.org"
chemsprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
chemsprops.WISE = complete.vars(chemsprops.WISE, sel = c("ORGC","CLAY","PHH2O","CECSOIL","k_ext"), coords = c("LONDD", "LATDD"))
}
dim(chemsprops.WISE)
#> [1] 23278 36
```
#### 5\.3\.0\.12 GEMAS
* Reimann, C., Fabian, K., Birke, M., Filzmoser, P., Demetriades, A., Négrel, P., … \& Anderson, M. (2018\). [GEMAS: Establishing geochemical background and threshold for 53 chemical elements in European agricultural soil](https://doi.org/10.1016/j.apgeochem.2017.01.021). Applied Geochemistry, 88, 302\-318\. Data download URL: <http://gemas.geolba.ac.at/>
```
if({
gemas.samples <- read.csv("/mnt/diskstation/data/Soil_points/EU/GEMAS/GEMAS.csv", stringsAsFactors = FALSE)
## GEMAS, agricultural soil, 0-20 cm, air dried, <2 mm, aqua regia Data from ACME, total C, TOC, CEC, ph_CaCl2
gemas.samples$hzn_top = 0
gemas.samples$hzn_bot = 20
gemas.samples$oc = gemas.samples$TOC * 10
#summary(gemas.samples$oc)
gemas.samples$c_tot = gemas.samples$C_tot * 10
gemas.samples$site_obsdate = 2009
gemas.h.lst <- c("ID", "COUNRTY", "site_obsdate", "XCOO", "YCOO", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "TYPE", "tex_psda", "clay", "silt", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = gemas.h.lst[which(!gemas.h.lst %in% names(gemas.samples))]
if(length(x.na)>0){ for(i in x.na){ gemas.samples[,i] = NA } }
chemsprops.GEMAS <- gemas.samples[,gemas.h.lst]
chemsprops.GEMAS$source_db = "GEMAS_2009"
chemsprops.GEMAS$confidence_degree = 2
chemsprops.GEMAS$project_url = "http://gemas.geolba.ac.at/"
chemsprops.GEMAS$citation_url = "https://doi.org/10.1016/j.apgeochem.2017.01.021"
chemsprops.GEMAS = complete.vars(chemsprops.GEMAS, sel = c("oc","clay","pH_CaCl2"), coords = c("XCOO", "YCOO"))
}
dim(chemsprops.GEMAS)
#> [1] 4131 36
```
#### 5\.3\.0\.13 LUCAS soil
* Orgiazzi, A., Ballabio, C., Panagos, P., Jones, A., \& Fernández‐Ugalde, O. (2018\). [LUCAS Soil, the largest expandable soil dataset for Europe: a review](https://doi.org/10.1111/ejss.12499). European Journal of Soil Science, 69(1\), 140\-153\. Data download URL: [https://esdac.jrc.ec.europa.eu/content/lucas\-2009\-topsoil\-data](https://esdac.jrc.ec.europa.eu/content/lucas-2009-topsoil-data)
```
if({
lucas.samples <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_TOPSOIL_v1.xlsx", sheet = 1)
lucas.samples$site_obsdate <- "2009"
#summary(lucas.samples$N)
lucas.ro <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/Romania.xlsx", sheet = 1)
lucas.ro$site_obsdate <- "2012"
names(lucas.samples)[which(!names(lucas.samples) %in% names(lucas.ro))]
lucas.ro = plyr::rename(lucas.ro, replace=c("Soil.ID"="sample_ID", "GPS_X_LONG"="GPS_LONG", "GPS_Y_LAT"="GPS_LAT", "pHinH2O"="pH_in_H2O", "pHinCaCl2"="pH_in_CaCl"))
lucas.bu <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/Bulgaria.xlsx", sheet = 1)
lucas.bu$site_obsdate <- "2012"
names(lucas.samples)[which(!names(lucas.samples) %in% names(lucas.bu))]
#lucas.ch <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_2015_Topsoil_data_of_Switzerland-with-coordinates.xlsx_.xlsx", sheet = 1, startRow = 2)
#lucas.ch = plyr::rename(lucas.ch, replace=c("Soil_ID"="sample_ID", "GPS_.LAT"="GPS_LAT", "pH.in.H2O"="pH_in_H2O", "pH.in.CaCl2"="pH_in_CaCl", "Calcium.carbonate/.g.kg–1"="CaCO3", "Silt/.g.kg–1"="silt", "Sand/.g.kg–1"="sand", "Clay/.g.kg–1"="clay", "Organic.carbon/.g.kg–1"="OC"))
## Double readings?
lucas.t = plyr::rbind.fill(list(lucas.samples, lucas.ro, lucas.bu))
lucas.h.lst <- c("POINT_ID", "usiteid", "site_obsdate", "GPS_LONG", "GPS_LAT", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay", "silt", "sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_in_H2O", "pH_in_CaCl", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas.h.lst[which(!lucas.h.lst %in% names(lucas.t))]
if(length(x.na)>0){ for(i in x.na){ lucas.t[,i] = NA } }
chemsprops.LUCAS <- lucas.t[,lucas.h.lst]
chemsprops.LUCAS$source_db = "LUCAS_2009"
chemsprops.LUCAS$hzn_top <- 0
chemsprops.LUCAS$hzn_bot <- 20
chemsprops.LUCAS$confidence_degree = 2
chemsprops.LUCAS$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS = complete.vars(chemsprops.LUCAS, sel = c("OC","clay","pH_in_H2O"), coords = c("GPS_LONG", "GPS_LAT"))
}
dim(chemsprops.LUCAS)
#> [1] 21272 36
```
```
if({
#lucas2015.samples <- openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.xlsx", sheet = 1)
lucas2015.xy = rgdal::readOGR("/mnt/diskstation/data/Soil_points/EU/LUCAS/LUCAS_Topsoil_2015_20200323.shp")
#head(as.data.frame(lucas2015.xy))
lucas2015.xy = as.data.frame(lucas2015.xy)
## https://www.aqion.de/site/130
## EC is reported in mS/m; 1 dS/m = 1 mS/cm = 100 mS/m, hence divide by 100
lucas2015.xy$ec_satp = lucas2015.xy$EC / 100
lucas2015.h.lst <- c("Point_ID", "LC0_Desc", "site_obsdate", "coords.x1", "coords.x2", "sample_ID", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "Clay", "Silt", "Sand", "OC", "oc_d", "c_tot", "N", "ph_kcl", "pH_H20", "pH_CaCl2", "CEC", "cec_nh4", "ecec", "coarse", "db_od", "ca_ext", "mg_ext", "na_ext", "K", "ec_satp", "ec_12pre")
x.na = lucas2015.h.lst[which(!lucas2015.h.lst %in% names(lucas2015.xy))]
if(length(x.na)>0){ for(i in x.na){ lucas2015.xy[,i] = NA } }
chemsprops.LUCAS2 <- lucas2015.xy[,lucas2015.h.lst]
chemsprops.LUCAS2$source_db = "LUCAS_2015"
chemsprops.LUCAS2$hzn_top <- 0
chemsprops.LUCAS2$hzn_bot <- 20
chemsprops.LUCAS2$site_obsdate <- "2015"
chemsprops.LUCAS2$confidence_degree = 2
chemsprops.LUCAS2$project_url = "https://esdac.jrc.ec.europa.eu/"
chemsprops.LUCAS2$citation_url = "https://doi.org/10.1111/ejss.12499"
chemsprops.LUCAS2 = complete.vars(chemsprops.LUCAS2, sel = c("OC","Clay","pH_H20"), coords = c("coords.x1", "coords.x2"))
}
dim(chemsprops.LUCAS2)
#> [1] 21859 36
```
#### 5\.3\.0\.14 Mangrove forest soil DB
* Sanderman, J., Hengl, T., Fiske, G., Solvik, K., Adame, M. F., Benson, L., … \& Duncan, C. (2018\). [A global map of mangrove forest soil carbon at 30 m spatial resolution](https://doi.org/10.1088/1748-9326/aabe1c). Environmental Research Letters, 13(5\), 055002\. Data download URL: [https://dataverse.harvard.edu/dataset.xhtml?persistentId\=doi:10\.7910/DVN/OCYUIT](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/OCYUIT)
```
if({
mng.profs <- read.csv("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_sites.csv", skip=1)
mng.hors <- read.csv("/mnt/diskstation/data/Soil_points/INT/TNC_mangroves/mangrove_soc_database_v10_horizons.csv", skip=1)
mngALL = plyr::join(mng.hors, mng.profs, by=c("Site.name"))
mngALL$oc = mngALL$OC_final * 10
mngALL$oc_d = mngALL$CD_calc * 1000
mngALL$hzn_top = mngALL$U_depth * 100
mngALL$hzn_bot = mngALL$L_depth * 100
mngALL$wpg2 = 0
#summary(mngALL$BD_reported) ## some very high values 3.26 t/m3
mngALL$Year = ifelse(is.na(mngALL$Year_sampled), mngALL$Years_collected, mngALL$Year_sampled)
mng.col = c("Site.name", "Site..", "Year", "Longitude_Adjusted", "Latitude_Adjusted", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD_reported", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = mng.col[which(!mng.col %in% names(mngALL))]
if(length(x.na)>0){ for(i in x.na){ mngALL[,i] = NA } }
chemsprops.Mangroves = mngALL[,mng.col]
chemsprops.Mangroves$source_db = "MangrovesDB"
chemsprops.Mangroves$confidence_degree = 4
chemsprops.Mangroves$project_url = "http://maps.oceanwealth.org/mangrove-restoration/"
chemsprops.Mangroves$citation_url = "https://doi.org/10.1088/1748-9326/aabe1c"
chemsprops.Mangroves = complete.vars(chemsprops.Mangroves, sel = c("oc","BD_reported"), coords = c("Longitude_Adjusted", "Latitude_Adjusted"))
#head(chemsprops.Mangroves)
#levels(as.factor(mngALL$OK.to.release.))
mng.rm = chemsprops.Mangroves$Site.name[chemsprops.Mangroves$Site.name %in% mngALL$Site.name[grep("N", mngALL$OK.to.release., ignore.case = FALSE)]]
}
dim(chemsprops.Mangroves)
#> [1] 7734 36
```
#### 5\.3\.0\.15 CIFOR peatland points
Peatland soil measurements (points) from the literature described in:
* Murdiyarso, D., Roman\-Cuesta, R. M., Verchot, L. V., Herold, M., Gumbricht, T., Herold, N., \& Martius, C. (2017\). New map reveals more peat in the tropics (Vol. 189\). CIFOR. [https://doi.org/10\.17528/cifor/006452](https://doi.org/10.17528/cifor/006452)
```
if({
cif.hors <- read.csv("/mnt/diskstation/data/Soil_points/INT/CIFOR_peatlands/SOC_literature_CIFOR.csv")
#summary(cif.hors$BD..g.cm..)
#summary(cif.hors$SOC)
cif.hors$oc = cif.hors$SOC * 10
cif.hors$wpg2 = 0
cif.hors$c_tot = cif.hors$TOC.content.... * 10
cif.hors$oc_d = cif.hors$C.density..kg.C.m..
cif.hors$site_obsdate = as.integer(substr(cif.hors$year, 1, 4))-1
cif.col = c("SOURCEID", "usiteid", "site_obsdate", "modelling.x", "modelling.y", "labsampnum", "layer_sequence", "Upper", "Lower", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BD..g.cm..", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cif.col[which(!cif.col %in% names(cif.hors))]
if(length(x.na)>0){ for(i in x.na){ cif.hors[,i] = NA } }
chemsprops.Peatlands = cif.hors[,cif.col]
chemsprops.Peatlands$source_db = "CIFOR"
chemsprops.Peatlands$confidence_degree = 4
chemsprops.Peatlands$project_url = "https://www.cifor.org/"
chemsprops.Peatlands$citation_url = "https://doi.org/10.17528/cifor/006452"
chemsprops.Peatlands = complete.vars(chemsprops.Peatlands, sel = c("oc","BD..g.cm.."), coords = c("modelling.x", "modelling.y"))
}
dim(chemsprops.Peatlands)
#> [1] 756 36
```
#### 5\.3\.0\.16 LandPKS observations
* Herrick, J. E., Urama, K. C., Karl, J. W., Boos, J., Johnson, M. V. V., Shepherd, K. D., … \& Kosnik, C. (2013\). [The Global Land\-Potential Knowledge System (LandPKS): Supporting Evidence\-based, Site\-specific Land Use and Management through Cloud Computing, Mobile Applications, and Crowdsourcing](https://doi.org/10.2489/jswc.68.1.5A). Journal of Soil and Water Conservation, 68(1\), 5A\-12A. Data download URL: [http://portal.landpotential.org/\#/landpksmap](http://portal.landpotential.org/#/landpksmap)
```
if({
pks = read.csv("/mnt/diskstation/data/Soil_points/INT/LandPKS/Export_LandInfo_Data.csv", stringsAsFactors = FALSE)
#str(pks)
pks.hor = data.frame(rock_fragments = c(pks$rock_fragments_layer_0_1cm,
pks$rock_fragments_layer_1_10cm,
pks$rock_fragments_layer_10_20cm,
pks$rock_fragments_layer_20_50cm,
pks$rock_fragments_layer_50_70cm,
pks$rock_fragments_layer_70_100cm,
pks$rock_fragments_layer_100_120cm),
tex_field = c(pks$texture_layer_0_1cm,
pks$texture_layer_1_10cm,
pks$texture_layer_10_20cm,
pks$texture_layer_20_50cm,
pks$texture_layer_50_70cm,
pks$texture_layer_70_100cm,
pks$texture_layer_100_120cm))
pks.hor$hzn_top = c(rep(0, nrow(pks)),
rep(1, nrow(pks)),
rep(10, nrow(pks)),
rep(20, nrow(pks)),
rep(50, nrow(pks)),
rep(70, nrow(pks)),
rep(100, nrow(pks)))
pks.hor$hzn_bot = c(rep(1, nrow(pks)),
rep(10, nrow(pks)),
rep(20, nrow(pks)),
rep(50, nrow(pks)),
rep(70, nrow(pks)),
rep(100, nrow(pks)),
rep(120, nrow(pks)))
pks.hor$longitude_decimal_degrees = rep(pks$longitude, 7)
pks.hor$latitude_decimal_degrees = rep(pks$latitude, 7)
pks.hor$site_obsdate = rep(pks$modified_date, 7)
pks.hor$site_key = rep(pks$id, 7)
#summary(as.factor(pks.hor$tex_field))
tex.tr = data.frame(tex_field=c("CLAY", "CLAY LOAM", "LOAM", "LOAMY SAND", "SAND", "SANDY CLAY", "SANDY CLAY LOAM", "SANDY LOAM", "SILT LOAM", "SILTY CLAY", "SILTY CLAY LOAM"),
clay_tot_psa=c(62.4, 34.0, 19.0, 5.8, 3.3, 41.7, 27.0, 10.0, 13.1, 46.7, 34.0),
silt_tot_psa=c(17.8, 34.0, 40.0, 12.0, 5.0, 6.7, 13.0, 25.0, 65.7, 46.7, 56.0),
sand_tot_psa=c(19.8, 32.0, 41.0, 82.2, 91.7, 51.6, 60.0, 65.0, 21.2, 6.7, 10.0))
pks.hor$clay_tot_psa = plyr::join(pks.hor["tex_field"], tex.tr)$clay_tot_psa
pks.hor$silt_tot_psa = plyr::join(pks.hor["tex_field"], tex.tr)$silt_tot_psa
pks.hor$sand_tot_psa = plyr::join(pks.hor["tex_field"], tex.tr)$sand_tot_psa
#summary(as.factor(pks.hor$rock_fragments))
pks.hor$wpg2 = ifelse(pks.hor$rock_fragments==">60%", 65, ifelse(pks.hor$rock_fragments=="35-60%", 47.5, ifelse(pks.hor$rock_fragments=="15-35%", 25, ifelse(pks.hor$rock_fragments=="1-15%" | pks.hor$rock_fragments=="0-15%", 7.5, ifelse(pks.hor$rock_fragments=="0-1%", 0.5, NA)))))
#head(pks.hor)
#plot(pks.hor[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
pks.col = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_field", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = pks.col[which(!pks.col %in% names(pks.hor))]
if(length(x.na)>0){ for(i in x.na){ pks.hor[,i] = NA } }
chemsprops.LandPKS = pks.hor[,pks.col]
chemsprops.LandPKS$source_db = "LandPKS"
chemsprops.LandPKS$confidence_degree = 8
chemsprops.LandPKS$project_url = "http://portal.landpotential.org"
chemsprops.LandPKS$citation_url = "https://doi.org/10.2489/jswc.68.1.5A"
chemsprops.LandPKS = complete.vars(chemsprops.LandPKS, sel = c("clay_tot_psa","wpg2"), coords = c("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
dim(chemsprops.LandPKS)
#> [1] 41644 36
```
#### 5\.3\.0\.17 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/). Data download URL: <http://egrpr.esoil.ru/content/1DB.html>
```
if({
russ.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = [paste](https://rdrr.io/r/base/paste.html)(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$wpg2 = russ.HOR$TEXTSTNS
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$TEXTSIF), 0, russ.HOR$TEXTSIF)
## Correct texture fractions:
sumTex <- [rowSums](https://rdrr.io/r/base/colSums.html)(russ.HOR[,[c](https://rdrr.io/r/base/c.html)("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- [rowMeans](https://rdrr.io/r/base/colSums.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(x1=russ.HOR$CORG * 10, x2=russ.HOR$ORGMAT/1.724 * 10), na.rm=TRUE)
russ.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(russ.HOR$oc / 1000 * russ.HOR$DVOL * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(russ.HOR$wpg2), 0, russ.HOR$wpg2))/100, 3)
russ.HOR$n_tot <- russ.HOR$NTOT * 10
russ.HOR$ca_ext = russ.HOR$EXCA * 200
russ.HOR$mg_ext = russ.HOR$EXMG * 121
russ.HOR$na_ext = russ.HOR$EXNA * 230
russ.HOR$k_ext = russ.HOR$EXK * 391
## Sampling year not available, but with high confidence the samples pre-date 2000
russ.HOR$site_obsdate = "1982"
russ.sel.h <- [c](https://rdrr.io/r/base/c.html)("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "oc_d", "c_tot", "NTOT", "PHSLT", "PHH2O", "ph_cacl2", "CECST", "cec_nh4", "ecec", "wpg2", "DVOL", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = russ.sel.h[[which](https://rdrr.io/r/base/which.html)(!russ.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(russ.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ russ.HOR[,i] = NA } }
chemsprops.EGRPR = russ.HOR[,russ.sel.h]
chemsprops.EGRPR$source_db = "Russia_EGRPR"
chemsprops.EGRPR$confidence_degree = 2
chemsprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
chemsprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
chemsprops.EGRPR <- complete.vars(chemsprops.EGRPR, sel=[c](https://rdrr.io/r/base/c.html)("oc", "CLYPPT"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.EGRPR)
#> [1] 4437 36
```
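The texture correction above rescales the sand and silt fractions so that the three size fractions sum to 100% while the measured clay content is kept fixed; written out, the code above is equivalent to

$$
\mathrm{sand}' = \mathrm{sand}\cdot\frac{100-\mathrm{clay}}{\mathrm{sand}+\mathrm{silt}}, \qquad
\mathrm{silt}' = \mathrm{silt}\cdot\frac{100-\mathrm{clay}}{\mathrm{sand}+\mathrm{silt}},
$$

so that $\mathrm{clay}+\mathrm{sand}'+\mathrm{silt}' = 100$. Organic carbon (g/kg) is taken as the mean of the measured `CORG` and the organic-matter content `ORGMAT` divided by the conventional van Bemmelen factor of 1.724.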
#### 5\.3\.0\.18 Canada National Pedon DB
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0). Data download URL: <https://open.canada.ca/data/en/>
```
if({
NPDB.nm = [c](https://rdrr.io/r/base/c.html)("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
NPDB.HOR$HISMMN = [paste0](https://rdrr.io/r/base/paste.html)(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
NPDB.HOR$CARB_ORG[NPDB.HOR$CARB_ORG==9] <- NA
NPDB.HOR$N_TOTAL[NPDB.HOR$N_TOTAL==9] <- NA
NPDB.HOR$oc = NPDB.HOR$CARB_ORG * 10
NPDB.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(NPDB.HOR$oc / 1000 * NPDB.HOR$BULK_DEN * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(NPDB.HOR$VC_SAND), 0, NPDB.HOR$VC_SAND))/100, 3)
NPDB.HOR$ca_ext = NPDB.HOR$EXCH_CA * 200
NPDB.HOR$mg_ext = NPDB.HOR$EXCH_MG * 121
NPDB.HOR$na_ext = NPDB.HOR$EXCH_NA * 230
NPDB.HOR$k_ext = NPDB.HOR$EXCH_K * 391
npdb.sel.h = [c](https://rdrr.io/r/base/c.html)("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "oc", "oc_d", "c_tot", "N_TOTAL", "ph_kcl", "PH_H2O", "PH_CACL2", "CEC", "cec_nh4", "ecec", "VC_SAND", "BULK_DEN", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = npdb.sel.h[[which](https://rdrr.io/r/base/which.html)(!npdb.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(NPDB.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ NPDB.HOR[,i] = NA } }
chemsprops.NPDB = NPDB.HOR[,npdb.sel.h]
chemsprops.NPDB$source_db = "Canada_NPDB"
chemsprops.NPDB$confidence_degree = 2
chemsprops.NPDB$project_url = "https://open.canada.ca/data/en/"
chemsprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
chemsprops.NPDB <- complete.vars(chemsprops.NPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc", "PH_H2O", "T_CLAY"), coords = [c](https://rdrr.io/r/base/c.html)("DD_LONG", "DD_LAT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NPDB)
#> [1] 15946 36
```
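The derived column `oc_d` computed here (and in most of the imports that follow) is the soil organic carbon density, i.e. organic carbon content combined with bulk density and corrected for the volume taken up by coarse fragments:

$$
\mathrm{OC_d}\ [\mathrm{kg\ m^{-3}}] = \frac{\mathrm{OC}\ [\mathrm{g\ kg^{-1}}]}{1000}\times \mathrm{BD}\ [\mathrm{kg\ m^{-3}}]\times\Big(1-\frac{\mathrm{CF}\ [\%]}{100}\Big)
$$

where bulk density is reported in g/cm³ and hence multiplied by 1000, and missing coarse-fragment values (CF) are treated as 0.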
#### 5\.3\.0\.19 Canadian upland forest soil profile and carbon stocks database
* Shaw, C., Hilger, A., Filiatrault, M., \& Kurz, W. (2018\). [A Canadian upland forest soil profile and carbon stocks database](https://doi.org/10.1002/ecy.2159). Ecology, 99(4\), 989\-989\. Data download URL: [https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi\=10\.1002%2Fecy.2159\&file\=ecy2159\-sup\-0001\-DataS1\.zip](https://esajournals.onlinelibrary.wiley.com/action/downloadSupplement?doi=10.1002%2Fecy.2159&file=ecy2159-sup-0001-DataS1.zip)
* Note: organic horizons have negative depth values, the first mineral soil horizon starts at 0 cm, and deeper mineral horizons have positive values. These depths need to be corrected (shifted) before the values can be combined with the other international data sets.
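A minimal toy illustration of that correction (hypothetical depths, not values from the database): each profile is shifted by the depth of its uppermost horizon whenever that depth is negative, so that the profile starts at 0 cm; this is what the `z.shift.cufs` step in the block below does per `LOCATION_ID`.

```
top <- c(-12, -5, 0, 20)   # organic horizons have negative upper depths
bot <- c( -5,  0, 20, 45)
shift <- ifelse(min(top) > 0, 0, min(top))   # shift only when the uppermost depth is negative
data.frame(hzn_top = top - shift, hzn_bot = bot - shift)
#> hzn_top becomes 0, 7, 12, 32 and hzn_bot becomes 7, 12, 32, 57
```
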
```
if({
## Reading of the .dat file was tricky
cufs.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/PROFILES.csv", stringsAsFactors = FALSE)
cufs.HOR$LOWER_HZN_LIMIT =cufs.HOR$UPPER_HZN_LIMIT + cufs.HOR$HZN_THICKNESS
## Correct depth (Canadian data can have negative depths for soil horizons):
z.min.cufs <- ddply(cufs.HOR, .(LOCATION_ID), summarize, aggregated = [min](https://rdrr.io/r/base/Extremes.html)(UPPER_HZN_LIMIT, na.rm=TRUE))
z.shift.cufs <- join(cufs.HOR["LOCATION_ID"], z.min.cufs, type="left")$aggregated
## fixed shift
z.shift.cufs <- [ifelse](https://rdrr.io/r/base/ifelse.html)(z.shift.cufs>0, 0, z.shift.cufs)
cufs.HOR$hzn_top <- cufs.HOR$UPPER_HZN_LIMIT - z.shift.cufs
cufs.HOR$hzn_bot <- cufs.HOR$LOWER_HZN_LIMIT - z.shift.cufs
cufs.SITE = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/CUFSDB/SITES.csv", stringsAsFactors = FALSE)
cufs.HOR$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LONGITUDE
cufs.HOR$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$LATITUDE
cufs.HOR$site_obsdate = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$YEAR_SAMPLED
cufs.HOR$usiteid = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(cufs.HOR["LOCATION_ID"], cufs.SITE)$RELEASE_SOURCE_SITEID
#summary(cufs.HOR$ORG_CARB_PCT)
#hist(cufs.HOR$ORG_CARB_PCT, breaks=45)
cufs.HOR$oc = cufs.HOR$ORG_CARB_PCT*10
#cufs.HOR$c_tot = cufs.HOR$oc + ifelse(is.na(cufs.HOR$CARBONATE_CARB_PCT), 0, cufs.HOR$CARBONATE_CARB_PCT*10)
cufs.HOR$n_tot = cufs.HOR$TOT_NITRO_PCT*10
cufs.HOR$ca_ext = cufs.HOR$EXCH_Ca * 200
cufs.HOR$mg_ext = cufs.HOR$EXCH_Mg * 121
cufs.HOR$na_ext = cufs.HOR$EXCH_Na * 230
cufs.HOR$k_ext = cufs.HOR$EXCH_K * 391
cufs.HOR$ph_cacl2 = cufs.HOR$pH
cufs.HOR$ph_cacl2[!cufs.HOR$pH_H2O_CACL2=="CACL2"] = NA
cufs.HOR$ph_h2o = cufs.HOR$pH
cufs.HOR$ph_h2o[!cufs.HOR$pH_H2O_CACL2=="H2O"] = NA
#summary(cufs.HOR$CF_VOL_PCT) ## is NA == 0??
cufs.HOR$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(cufs.HOR$CF_CORR_FACTOR==1, 0, cufs.HOR$CF_VOL_PCT)
cufs.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cufs.HOR$oc / 1000 * cufs.HOR$BULK_DENSITY * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cufs.HOR$wpg2), 0, cufs.HOR$wpg2))/100, 3)
cufs.sel.h = [c](https://rdrr.io/r/base/c.html)("LOCATION_ID", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "HZN_SEQ_NO", "hzn_top", "hzn_bot", "HORIZON", "TEXT_CLASS", "CLAY_PCT", "SILT_PCT", "SAND_PCT", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "CEC_CALCULATED", "cec_nh4", "ecec", "wpg2", "BULK_DENSITY", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELEC_COND", "ec_12pre")
x.na = cufs.sel.h[[which](https://rdrr.io/r/base/which.html)(!cufs.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cufs.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cufs.HOR[,i] = NA } }
chemsprops.CUFS = cufs.HOR[,cufs.sel.h]
chemsprops.CUFS$source_db = "Canada_CUFS"
chemsprops.CUFS$confidence_degree = 1
chemsprops.CUFS$project_url = "https://cfs.nrcan.gc.ca/publications/centre/nofc"
chemsprops.CUFS$citation_url = "https://doi.org/10.1002/ecy.2159"
chemsprops.CUFS <- complete.vars(chemsprops.CUFS, sel=[c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "CLAY_PCT"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CUFS)
#> [1] 15162 36
```
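The constants 200, 121, 230 and 391 used for the exchangeable cations in this and the preceding imports convert laboratory values reported in cmol(+)/kg (equivalent to meq/100 g) into mg/kg using the atomic mass and charge of each cation:

$$
\mathrm{mg\ kg^{-1}} = \mathrm{cmol_c\ kg^{-1}}\times\frac{\text{atomic mass}}{\text{charge}}\times 10
$$

which gives ≈200 for Ca²⁺ (40.08/2 × 10), ≈121 for Mg²⁺ (24.31/2 × 10), ≈230 for Na⁺ (22.99 × 10) and ≈391 for K⁺ (39.10 × 10).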
#### 5\.3\.0\.20 Permafrost in subarctic Canada
* Estop\-Aragones, C.; Fisher, J.P.; Cooper, M.A.; Thierry, A.; Treharne, R.; Murton, J.B.; Phoenix, G.K.; Charman, D.J.; Williams, M.; Hartley, I.P. (2016\). Bulk density, carbon and nitrogen content in soil profiles from permafrost in subarctic Canada. NERC Environmental Information Data Centre. Data download URL: [https://doi.org/10\.5285/efa2a84b\-3505\-4221\-a7da\-12af3cdc1952](https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952)
```
if({
caperm.HOR = vroom::[vroom](https://vroom.r-lib.org/reference/vroom.html)("/mnt/diskstation/data/Soil_points/Canada/NorthCanada/Bulk_density_CandNcontent_profiles_all_sites.csv")
#measurements::conv_unit("-99 36 15.7", from = "deg_min_sec", to = "dec_deg")
#caperm.HOR$longitude_decimal_degrees = as.numeric(measurements::conv_unit(paste0("-", gsub('\"W', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_West, "UTF-8", "UTF-8", sub=' ')), fixed = TRUE)), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)([paste0](https://rdrr.io/r/base/paste.html)("-", caperm.HOR$Cordinates_West), from = "deg_min_sec", to = "dec_deg"))
#caperm.HOR$latitude_decimal_degrees = as.numeric(measurements::conv_unit(gsub('\"N', '', gsub('o', '', gsub("'", ' ', iconv(caperm.HOR$Coordinates_North, "UTF-8", "UTF-8", sub=' '))), fixed = TRUE), from = "deg_min_sec", to = "dec_deg"))
caperm.HOR$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)(measurements::[conv_unit](https://rdrr.io/pkg/measurements/man/conv_unit.html)(caperm.HOR$Cordinates_North, from = "deg_min_sec", to = "dec_deg"))
#plot(caperm.HOR[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
caperm.HOR$site_obsdate = "2013"
caperm.HOR$site_key = [make.unique](https://rdrr.io/r/base/make.unique.html)(caperm.HOR$Soil.core)
#summary(as.factor(caperm.HOR$Soil_depth_cm))
caperm.HOR$hzn_top = caperm.HOR$Soil_depth_cm-1
caperm.HOR$hzn_bot = caperm.HOR$Soil_depth_cm+1
caperm.HOR$db_od = caperm.HOR$Bulk_density_gdrysoil_cm3wetsoil
caperm.HOR$oc = caperm.HOR$Ccontent_percentage_on_drymass * 10
caperm.HOR$n_tot = caperm.HOR$Ncontent_percentage_on_drymass * 10
caperm.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(caperm.HOR$oc / 1000 * caperm.HOR$db_od * 1000, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(caperm.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ caperm.HOR[,i] = NA } }
chemsprops.CAPERM = caperm.HOR[,col.names]
chemsprops.CAPERM$source_db = "Canada_subarctic"
chemsprops.CAPERM$confidence_degree = 2
chemsprops.CAPERM$project_url = "http://arp.arctic.ac.uk/projects/carbon-cycling-linkages-permafrost-systems-cyclops/"
chemsprops.CAPERM$citation_url = "https://doi.org/10.5285/efa2a84b-3505-4221-a7da-12af3cdc1952"
chemsprops.CAPERM <- complete.vars(chemsprops.CAPERM, sel=[c](https://rdrr.io/r/base/c.html)("oc", "n_tot"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CAPERM)
#> [1] 1180 36
```
#### 5\.3\.0\.21 SOTER China soil profiles
* Dijkshoorn, K., van Engelen, V., \& Huting, J. (2008\). [Soil and landform properties for LADA partner countries](https://isric.org/sites/default/files/isric_report_2008_06.pdf). ISRIC report 2008/06 and GLADA report 2008/03, ISRIC – World Soil Information and FAO, Wageningen. Data download URL: [https://files.isric.org/public/soter/CN\-SOTER.zip](https://files.isric.org/public/soter/CN-SOTER.zip)
```
if({
sot.sites = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Profile.csv")
sot.horizons = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/China/China_SOTERv1/CHINA_SOTERv1_Horizon.csv")
sot.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(sot.sites, sot.horizons), type = "full")
sot.HOR$oc = sot.HOR$SOCA * 10
sot.HOR$ca_ext = sot.HOR$EXCA * 200
sot.HOR$mg_ext = sot.HOR$EXMG * 121
sot.HOR$na_ext = sot.HOR$EXNA * 230
sot.HOR$k_ext = sot.HOR$EXCK * 391
## upper depths are missing and need to be derived manually
sot.HOR$hzn_top = NA
sot.HOR$hzn_top[2:[nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)] <- sot.HOR$HBDE[1:([nrow](https://rdrr.io/r/base/nrow.html)(sot.HOR)-1)]
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$hzn_top > sot.HOR$HBDE, 0, sot.HOR$hzn_top)
sot.HOR$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(sot.HOR$HONU==1 & [is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$hzn_top), 0, sot.HOR$hzn_top)
sot.HOR$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sot.HOR$oc / 1000 * sot.HOR$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sot.HOR$SDVC), 0, sot.HOR$SDVC))/100, 3)
sot.sel.h = [c](https://rdrr.io/r/base/c.html)("PRID", "PDID", "SAYR", "LNGI", "LATI", "labsampnum", "HONU", "hzn_top","HBDE","HODE", "PSCL", "CLPC", "STPC", "SDTO", "oc", "oc_d", "TOTC", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "SDVC", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = sot.sel.h[[which](https://rdrr.io/r/base/which.html)(!sot.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sot.HOR))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sot.HOR[,i] = NA } }
chemsprops.CNSOT = sot.HOR[,sot.sel.h]
chemsprops.CNSOT$source_db = "China_SOTER"
chemsprops.CNSOT$confidence_degree = 8
chemsprops.CNSOT$project_url = "https://www.isric.org/explore/soter"
chemsprops.CNSOT$citation_url = "https://isric.org/sites/default/files/isric_report_2008_06.pdf"
chemsprops.CNSOT <- complete.vars(chemsprops.CNSOT, sel=[c](https://rdrr.io/r/base/c.html)("TOTC", "PHAQ", "CLPC"), coords = [c](https://rdrr.io/r/base/c.html)("LNGI", "LATI"))
}
#> Joining by: PRID, INFR
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CNSOT)
#> [1] 5105 36
```
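Since the SOTER horizon table stores only the lower depth (`HBDE`), the upper depth of each horizon has to be taken from the lower depth of the preceding horizon within the same profile, with the first horizon (`HONU == 1`) starting at 0 cm. A compact per-profile version of the same idea, shown here only as a sketch on made-up data (not the code used above):

```
d <- data.frame(PRID = c("P1","P1","P1","P2","P2"),
                HONU = c(1, 2, 3, 1, 2),
                HBDE = c(20, 45, 90, 15, 60))
d <- d[order(d$PRID, d$HONU), ]
## upper depth = previous horizon's lower depth within the profile; first horizon starts at 0
d$hzn_top <- ave(d$HBDE, d$PRID, FUN = function(x) c(0, x[-length(x)]))
d$hzn_top
#> [1]  0 20 45  0 15
```
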
#### 5\.3\.0\.22 SISLAC
* Sistema de Información de Suelos de Latinoamérica (SISLAC; Latin American Soil Information System). Data download URL: [http://54\.229\.242\.119/sislac/es](http://54.229.242.119/sislac/es)
```
if({
sis.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/SA/SISLAC/sislac_profiles_es.csv", stringsAsFactors = FALSE)
#str(sis.hor)
## SOC values for Uruguay do not match the original soil profile data (see e.g. http://www.mgap.gub.uy/sites/default/files/multimedia/skmbt_c45111090914030.pdf)
## compare with:
#sis.hor[sis.hor$perfil_id=="23861",]
## Subset to SISINTA/WOSIS points:
cor.sel = [c](https://rdrr.io/r/base/c.html)([grep](https://rdrr.io/r/base/grep.html)("WoSIS", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)), [grep](https://rdrr.io/r/base/grep.html)("SISINTA", [paste](https://rdrr.io/r/base/paste.html)(sis.hor$perfil_numero)))
#length(cor.sel)
sis.hor = sis.hor[cor.sel,]
#summary(sis.hor$analitico_carbono_organico_c)
sis.hor$oc = sis.hor$analitico_carbono_organico_c * 10
sis.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(sis.hor$oc / 1000 * sis.hor$analitico_densidad_aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(sis.hor$analitico_gravas), 0, sis.hor$analitico_gravas))/100, 3)
#summary(sis.hor$analitico_base_k)
#summary(as.factor(sis.hor$perfil_fecha))
sis.sel.h = [c](https://rdrr.io/r/base/c.html)("perfil_id", "perfil_numero", "perfil_fecha", "perfil_ubicacion_longitud", "perfil_ubicacion_latitud", "id", "layer_sequence", "profundidad_superior", "profundidad_inferior", "hzn_desgn", "tex_psda", "analitico_arcilla", "analitico_limo_2_50", "analitico_arena_total", "oc", "oc_d", "c_tot", "n_tot", "analitico_ph_kcl", "analitico_ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "analitico_gravas", "analitico_densidad_aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "analitico_conductividad", "ec_12pre")
x.na = sis.sel.h[[which](https://rdrr.io/r/base/which.html)(!sis.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sis.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sis.hor[,i] = NA } }
chemsprops.SISLAC = sis.hor[,sis.sel.h]
chemsprops.SISLAC$source_db = "SISLAC"
chemsprops.SISLAC$confidence_degree = 4
chemsprops.SISLAC$project_url = "http://54.229.242.119/sislac/es"
chemsprops.SISLAC$citation_url = "https://hdl.handle.net/10568/49611"
chemsprops.SISLAC <- complete.vars(chemsprops.SISLAC, sel=[c](https://rdrr.io/r/base/c.html)("oc","analitico_ph_kcl","analitico_arcilla"), coords = [c](https://rdrr.io/r/base/c.html)("perfil_ubicacion_longitud", "perfil_ubicacion_latitud"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SISLAC)
#> [1] 49994 36
```
#### 5\.3\.0\.23 FEBR
* Samuel\-Rosa, A., Dalmolin, R. S. D., Moura\-Bueno, J. M., Teixeira, W. G., \& Alba, J. M. F. (2020\). Open legacy soil survey data in Brazil: geospatial data quality and how to improve it. Scientia Agricola, 77(1\). [https://doi.org/10\.1590/1678\-992x\-2017\-0430](https://doi.org/10.1590/1678-992x-2017-0430)
* Free Brazilian Repository for Open Soil Data – febr. Data download URL: <http://www.ufsm.br/febr/>
```
if({
#library(febr)
## download up-to-date copy of data
#febr.lab = febr::layer(dataset = "all", variable="all")
#febr.lab = febr::observation(dataset = "all")
febr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Brasil/FEBR/febr-superconjunto.csv", stringsAsFactors = FALSE, dec = ",", sep = ";")
#head(febr.hor)
#summary(febr.hor$carbono)
#summary(febr.hor$ph)
#summary(febr.hor$dsi) ## bulk density of total soil
febr.hor$clay_tot_psa = febr.hor$argila /10
febr.hor$sand_tot_psa = febr.hor$areia /10
febr.hor$silt_tot_psa = febr.hor$silte /10
febr.hor$wpg2 = (1000-febr.hor$terrafina)/10
febr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(febr.hor$carbono / 1000 * febr.hor$dsi * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(febr.hor$wpg2), 0, febr.hor$wpg2))/100, 3)
febr.sel.h <- [c](https://rdrr.io/r/base/c.html)("observacao_id", "usiteid", "observacao_data", "coord_x", "coord_y", "sisb_id", "camada_id", "profund_sup", "profund_inf", "camada_nome", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "carbono", "oc_d", "c_tot", "nitrogenio", "ph_kcl", "ph", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "dsi", "ca_ext", "mg_ext", "na_ext", "k_ext", "ce", "ec_12pre")
x.na = febr.sel.h[[which](https://rdrr.io/r/base/which.html)(!febr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(febr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ febr.hor[,i] = NA } }
chemsprops.FEBR = febr.hor[,febr.sel.h]
chemsprops.FEBR$source_db = "FEBR"
chemsprops.FEBR$confidence_degree = 4
chemsprops.FEBR$project_url = "http://www.ufsm.br/febr/"
chemsprops.FEBR$citation_url = "https://doi.org/10.1590/1678-992x-2017-0430"
chemsprops.FEBR <- complete.vars(chemsprops.FEBR, sel=[c](https://rdrr.io/r/base/c.html)("carbono","ph","clay_tot_psa","dsi"), coords = [c](https://rdrr.io/r/base/c.html)("coord_x", "coord_y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.FEBR)
#> [1] 7842 36
```
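In the FEBR superconjunto (combined) table the particle-size fractions (`argila` = clay, `silte` = silt, `areia` = sand) and the fine-earth content (`terrafina`) are reported in g/kg, hence the division by 10 to obtain percentages; the coarse-fragment content is taken as the complement of the fine earth:

$$
\mathrm{wpg2}\ [\%] = \frac{1000-\mathrm{terrafina}\ [\mathrm{g\ kg^{-1}}]}{10}
$$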
#### 5\.3\.0\.24 PRONASOLOS
* POLIDORO, J., COELHO, M., CARVALHO FILHO, A. D., LUMBRERAS, J., de OLIVEIRA, A. P., VASQUES, G. D. M., … \& BREFIN, M. (2021\). [Programa Nacional de Levantamento e Interpretação de Solos do Brasil (PronaSolos): diretrizes para implementação](https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056). Embrapa Solos\-Documentos (INFOTECA\-E).
* Download URL: <http://geoinfo.cnps.embrapa.br/documents/3013/download>
```
if({
pronas.hor = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)("/mnt/diskstation/data/Soil_points/Brasil/Pronasolos/Perfis_PronaSolos_20201202v2.shp"))
## 34,464 rows
#head(pronas.hor)
#summary(as.numeric(pronas.hor$carbono_or))
#summary(as.numeric(pronas.hor$densidade_))
#summary(as.numeric(pronas.hor$argila))
#summary(as.numeric(pronas.hor$cascalho))
#summary(as.numeric(pronas.hor$ph_h2o))
#summary(as.numeric(pronas.hor$complexo_2))
## A lot of errors / typos e.g. very high values and 0 values!!
#pronas.hor$data_colet[1:50]
pronas.in.name = [c](https://rdrr.io/r/base/c.html)("sigla", "codigo_pon", "data_colet", "gcs_latitu", "gcs_longit", "simbolo_ho", "profundida",
"profundi_1", "cascalho", "areia_tota", "silte", "argila", "densidade_", "ph_h2o", "ph_kcl",
"complexo_s", "complexo_1", "complexo_2", "complexo_3", "valor_s", "carbono_or", "nitrogenio",
"condutivid", "classe_tex")
#pronas.in.name[which(!pronas.in.name %in% names(pronas.hor))]
pronas.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(pronas.hor[,pronas.in.name])
pronas.out.name = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "latitude_decimal_degrees", "longitude_decimal_degrees",
"hzn_desgn", "hzn_bot", "hzn_top", "wpg2", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "db_od", "ph_h2o", "ph_kcl", "ca_ext",
"mg_ext", "k_ext", "na_ext", "cec_sum", "oc", "n_tot", "ec_satp", "tex_psda")
## translate values
pronas.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(pronas.in.name)))
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="sigla")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="codigo_pon")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="data_colet")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="simbolo_ho")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="classe_tex")]] = "paste(x)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_s")]] = "as.numeric(x)*200"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_1")]] = "as.numeric(x)*121"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_2")]] = "as.numeric(x)*391"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="complexo_3")]] = "as.numeric(x)*230"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="areia_tota")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="silte")]] = "round(as.numeric(x)/10, 1)"
pronas.fun.lst[[[which](https://rdrr.io/r/base/which.html)(pronas.in.name=="argila")]] = "round(as.numeric(x)/10, 1)"
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(pronas.in.name, pronas.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(pronas.fun.lst)), "pronas_soilab_transvalues.csv")
pronas.soil = transvalues(pronas.x, pronas.out.name, pronas.in.name, pronas.fun.lst)
pronas.soil$oc_d = [signif](https://rdrr.io/r/base/Round.html)(pronas.soil$oc / 1000 * pronas.soil$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(pronas.soil$wpg2), 0, pronas.soil$wpg2))/100, 3)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(pronas.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ pronas.soil[,i] = NA } }
chemsprops.PRONASOLOS = pronas.soil[,col.names]
chemsprops.PRONASOLOS$source_db = "PRONASOLOS"
chemsprops.PRONASOLOS$confidence_degree = 2
chemsprops.PRONASOLOS$project_url = "https://geoportal.cprm.gov.br/pronasolos/"
chemsprops.PRONASOLOS$citation_url = "https://www.infoteca.cnptia.embrapa.br/infoteca/handle/doc/1135056"
chemsprops.PRONASOLOS <- complete.vars(chemsprops.PRONASOLOS, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.PRONASOLOS)
#> [1] 31747 36
```
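The PRONASOLOS import relies on `transvalues()`, a helper defined earlier in this compilation, to apply the per-column translation rules stored as character strings in `pronas.fun.lst`. A minimal sketch of what such a translation step could look like (purely illustrative; `translate_cols()` is a hypothetical stand-in, not the actual implementation):

```
## apply a character rule such as "as.numeric(x)*200" or "paste(x)" to each
## input column and return a data frame with the target column names
translate_cols <- function(df, out.name, in.name, fun.lst){
  out <- lapply(seq_along(in.name), function(i){
    x <- df[[in.name[i]]]
    eval(parse(text = fun.lst[[i]]))
  })
  names(out) <- out.name
  as.data.frame(out, stringsAsFactors = FALSE)
}
## e.g. translate_cols(pronas.x, pronas.out.name, pronas.in.name, pronas.fun.lst)
```
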
#### 5\.3\.0\.25 Soil Profile DB for Costa Rica
* Mata, R., Vázquez, A., Rosales, A., \& Salazar, D. (2012\). [Mapa digital de suelos de Costa Rica](http://www.cia.ucr.ac.cr/?page_id=139). Asociación Costarricense de la Ciencia del Suelo, San José, CRC. Escala, 1, 200000\. Data download URL: [http://www.cia.ucr.ac.cr/wp\-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1\.1\.rar](http://www.cia.ucr.ac.cr/wp-content/recursosnaturales/Base%20perfiles%20de%20suelos%20v1.1.rar)
* Mata\-Chinchilla, R., \& Castro\-Chinchilla, J. (2019\). Geoportal de suelos de Costa Rica como Bien Público al servicio del país. Revista Tecnología En Marcha, 32(7\), Pág. 51\-56\. [https://doi.org/10\.18845/tm.v32i7\.4259](https://doi.org/10.18845/tm.v32i7.4259)
```
if({
cr.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Costa_Rica/Base_de_datos_version_1.2.3.csv", stringsAsFactors = FALSE)
#plot(cr.hor[,c("X","Y")], pch="+", asp=1)
cr.hor$usiteid = [paste](https://rdrr.io/r/base/paste.html)(cr.hor$Provincia, cr.hor$Cantón, cr.hor$Id, sep="_")
#summary(cr.hor$Corg.)
cr.hor$oc = cr.hor$Corg. * 10
cr.hor$Densidad.Aparente = [as.numeric](https://rdrr.io/r/base/numeric.html)([paste0](https://rdrr.io/r/base/paste.html)(cr.hor$Densidad.Aparente))
#summary(cr.hor$K)
cr.hor$ca_ext = cr.hor$Ca * 200
cr.hor$mg_ext = cr.hor$Mg * 121
#cr.hor$na_ext = cr.hor$Na * 230
cr.hor$k_ext = cr.hor$K * 391
cr.hor$wpg2 = NA
cr.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(cr.hor$oc / 1000 * cr.hor$Densidad.Aparente * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(cr.hor$wpg2), 0, cr.hor$wpg2))/100, 3)
cr.sel.h = [c](https://rdrr.io/r/base/c.html)("Id", "usiteid", "Fecha", "X", "Y", "labsampnum", "horizonte", "prof_inicio", "prof_final", "id_hz", "Clase.Textural", "ARCILLA", "LIMO", "ARENA", "oc", "oc_d", "c_tot", "n_tot", "pHKCl", "pH_H2O", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "Densidad.Aparente", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = cr.sel.h[[which](https://rdrr.io/r/base/which.html)(!cr.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(cr.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ cr.hor[,i] = NA } }
chemsprops.CostaRica = cr.hor[,cr.sel.h]
chemsprops.CostaRica$source_db = "CostaRica"
chemsprops.CostaRica$confidence_degree = 4
chemsprops.CostaRica$project_url = "http://www.cia.ucr.ac.cr"
chemsprops.CostaRica$citation_url = "https://doi.org/10.18845/tm.v32i7.4259"
chemsprops.CostaRica <- complete.vars(chemsprops.CostaRica, sel=[c](https://rdrr.io/r/base/c.html)("oc","pH_H2O","ARCILLA","Densidad.Aparente"), coords = [c](https://rdrr.io/r/base/c.html)("X", "Y"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CostaRica)
#> [1] 2042 36
```
#### 5\.3\.0\.26 Iran soil profile DB
* Dewan, M. L., \& Famouri, J. (1964\). The soils of Iran. Food and Agriculture Organization of the United Nations.
* Hengl, T., Toomanian, N., Reuter, H. I., \& Malakouti, M. J. (2007\). [Methods to interpolate soil categorical variables from profile observations: Lessons from Iran](https://doi.org/10.1016/j.geoderma.2007.04.022). Geoderma, 140(4\), 417\-427\.
* Mohammad, H. B. (2000\). Soil resources and use potentiality map of Iran. Soil and Water Research Institute, Teheran, Iran.
```
if({
na.s = [c](https://rdrr.io/r/base/c.html)("?","","?.","??", -2147483647, -1.00e+308, "<NA>")
iran.hor = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbana.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:12]
[names](https://rdrr.io/r/base/names.html)(iran.hor) = [c](https://rdrr.io/r/base/c.html)("site_key", "hzn_desgn", "hzn_top", "hzn_bot", "ph_h2o", "ec_satp", "oc", "CACO", "PBS", "sand_tot_psa", "silt_tot_psa", "clay_tot_psa")
iran.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(iran.hor$hzn_top) & iran.hor$hzn_desgn=="A", 0, iran.hor$hzn_top)
iran.hor2 = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sdbhor.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)[,1:8]
[names](https://rdrr.io/r/base/names.html)(iran.hor2) = [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence", "DESI", "hzn_top", "hzn_bot", "M_colour", "tex_psda", "hzn_desgn")
iran.site = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Iran/iran_sgdb.txt", stringsAsFactors = FALSE, na.strings = na.s, header = FALSE)
[names](https://rdrr.io/r/base/names.html)(iran.site) = [c](https://rdrr.io/r/base/c.html)("usiteid", "latitude_decimal_degrees", "longitude_decimal_degrees", "FAO", "Tax", "site_key")
iran.db = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(iran.site, iran.hor, iran.hor2))
iran.db$oc = iran.db$oc * 10
#summary(iran.db$oc)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(iran.db))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ iran.db[,i] = NA } }
chemsprops.IRANSPDB = iran.db[,col.names]
chemsprops.IRANSPDB$source_db = "Iran_SPDB"
chemsprops.IRANSPDB$confidence_degree = 4
chemsprops.IRANSPDB$project_url = ""
chemsprops.IRANSPDB$citation_url = "https://doi.org/10.1016/j.geoderma.2007.04.022"
chemsprops.IRANSPDB <- complete.vars(chemsprops.IRANSPDB, sel=[c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.IRANSPDB)
#> [1] 4759 36
```
#### 5\.3\.0\.27 Northern circumpolar permafrost soil profiles
* Hugelius, G., Bockheim, J. G., Camill, P., Elberling, B., Grosse, G., Harden, J. W., … \& Michaelson, G. (2013\). [A new data set for estimating organic carbon storage to 3 m depth in soils of the northern circumpolar permafrost region](https://doi.org/10.5194/essd-5-393-2013). Earth System Science Data (Online), 5(2\). Data download URL: [http://dx.doi.org/10\.5879/ECDS/00000002](http://dx.doi.org/10.5879/ECDS/00000002)
```
if({
ncscd.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/NCSCD/Harden_etal_2012_Hugelius_etal_2013_cleaned_data.csv", stringsAsFactors = FALSE)
ncscd.hors$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$X.C)*10
#summary(ncscd.hors$oc)
#hist(ncscd.hors$Layer.thickness.cm, breaks = 45)
ncscd.hors$Layer.thickness.cm = [ifelse](https://rdrr.io/r/base/ifelse.html)(ncscd.hors$Layer.thickness.cm<0, NA, ncscd.hors$Layer.thickness.cm)
ncscd.hors$hzn_bot = ncscd.hors$Basal.Depth.cm + ncscd.hors$Layer.thickness.cm
ncscd.hors$db_od = [as.numeric](https://rdrr.io/r/base/numeric.html)(ncscd.hors$bulk.density.g.cm.3)
## Can we assume no coarse fragments?
ncscd.hors$wpg2 = 0
ncscd.hors$oc_d = [signif](https://rdrr.io/r/base/Round.html)(ncscd.hors$oc / 1000 * ncscd.hors$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(ncscd.hors$wpg2), 0, ncscd.hors$wpg2))/100, 3)
## very high values >40 kg/m3
ncscd.hors$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(ncscd.hors$Sample.date, format="%d-%m-%Y"), "%Y-%m-%d")
#summary(ncscd.hors$db_od)
ncscd.col = [c](https://rdrr.io/r/base/c.html)("Profile.ID", "citation", "site_obsdate", "Long", "Lat", "labsampnum", "layer_sequence", "Basal.Depth.cm", "hzn_bot", "Horizon.type", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ncscd.col[[which](https://rdrr.io/r/base/which.html)(!ncscd.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ncscd.hors))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ncscd.hors[,i] = NA } }
chemsprops.NCSCD = ncscd.hors[,ncscd.col]
chemsprops.NCSCD$source_db = "NCSCD"
chemsprops.NCSCD$confidence_degree = 10
chemsprops.NCSCD$project_url = "https://bolin.su.se/data/ncscd/"
chemsprops.NCSCD$citation_url = "https://doi.org/10.5194/essd-5-393-2013"
chemsprops.NCSCD = complete.vars(chemsprops.NCSCD, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od"), coords = [c](https://rdrr.io/r/base/c.html)("Long", "Lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NCSCD)
#> [1] 7104 36
```
#### 5\.3\.0\.28 CSIRO National Soil Site Database
* CSIRO (2020\). CSIRO National Soil Site Database. v4\. CSIRO. Data Collection. <https://data.csiro.au/collections/collection/CIcsiro:7526v004>. Data download URL: [https://doi.org/10\.25919/5eeb2a56eac12](https://doi.org/10.25919/5eeb2a56eac12) (available upon request)
* Searle, R. (2014\). The Australian site data collation to support the GlobalSoilMap. GlobalSoilMap: Basis of the global spatial soil information system, 127\.
```
if({
[library](https://rdrr.io/r/base/library.html)([Hmisc](http://biostat.mc.vanderbilt.edu/Hmisc))
cmdb <- [mdb.get](https://rdrr.io/pkg/Hmisc/man/mdb.get.html)("/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612.mdb")
#str(cmdb$SITES)
au.obs = cmdb$OBSERVATIONS[,[c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "o.date.desc", "o.latitude.GDA94", "o.longitude.GDA94")]
au.obs = au.obs[,]
coordinates(au.obs) <- ~o.longitude.GDA94+o.latitude.GDA94
proj4string(au.obs) <- CRS("+proj=longlat +ellps=GRS80 +no_defs")
au.xy <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(au.obs, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
#plot(au.xy[,c("o.longitude.GDA94", "o.latitude.GDA94")])
## all measured values are stored in a single column and need to be split out by lab-method code
#summary(cmdb$LAB_METHODS$LABM.SHORT.NAME)
#write.csv(cmdb$LAB_METHODS, "/mnt/diskstation/data/Soil_points/Australia/CSIRO/NatSoil_v2_20200612_lab_methods.csv")
lab.tbl = [list](https://rdrr.io/r/base/list.html)(
[c](https://rdrr.io/r/base/c.html)("6_DC", "6A1", "6A1_UC", "6B1", "6B2", "6B2a", "6B2b", "6B3", "6B4", "6B4a", "6B4b", "6Z"), # %
[c](https://rdrr.io/r/base/c.html)("6B3a"), # g/kg
[c](https://rdrr.io/r/base/c.html)("6H4", "6H4_SCaRP"), # %
[c](https://rdrr.io/r/base/c.html)("7_C_B", "7_NR", "7A1", "7A2", "7A2a", "7A2b", "7A3", "7A4", "7A5", "7A6", "7A6a", "7A6b", "7A6b_MCLW"), # g/kg
[c](https://rdrr.io/r/base/c.html)("4A1", "4_NR", "4A_C_2.5", "4A_C_1", "4G1"),
[c](https://rdrr.io/r/base/c.html)("4C_C_1", "4C1", "4C2", "23A"),
[c](https://rdrr.io/r/base/c.html)("4B_C_2.5", "4B1", "4B2"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_C", "P10_HYD_C", "P10_PB_C", "P10_PB1_C", "P10_CF_C", "P10_I_C"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_Z", "P10_HYD_Z", "P10_PB_Z", "P10_PB1_Z", "P10_CF_Z", "P10_I_Z"),
[c](https://rdrr.io/r/base/c.html)("P10_NR_S", "P10_HYD_S", "P10_PB_S", "P10_PB1_S", "P10_CF_S", "P10_I_S"),
[c](https://rdrr.io/r/base/c.html)("15C1modCEC", "15_HSK_CEC", "15J_CEC"),
[c](https://rdrr.io/r/base/c.html)("15I1", "15I2", "15I3", "15I4", "15D3_CEC"),
[c](https://rdrr.io/r/base/c.html)("15_BASES", "15_NR", "15J_H", "15J1"),
[c](https://rdrr.io/r/base/c.html)("2Z2_Grav", "P10_GRAV"),
[c](https://rdrr.io/r/base/c.html)("503.08a", "P3A_NR", "P3A1", "P3A1_C4", "P3A1_CLOD", "P3A1_e"),
[c](https://rdrr.io/r/base/c.html)("18F1_CA"),
[c](https://rdrr.io/r/base/c.html)("18F1_MG"),
[c](https://rdrr.io/r/base/c.html)("18F1_NA"),
[c](https://rdrr.io/r/base/c.html)("18F1_K", "18F2", "18A1mod", "18_NR", "18A1", "18A1_NR", "18B1", "18B2"),
[c](https://rdrr.io/r/base/c.html)("3_C_B", "3_NR", "3A_TSS"),
[c](https://rdrr.io/r/base/c.html)("3A_C_2.5", "3A1")
)
[names](https://rdrr.io/r/base/names.html)(lab.tbl) = [c](https://rdrr.io/r/base/c.html)("oc", "ocP", "c_tot", "n_tot", "ph_h2o", "ph_kcl", "ph_cacl2", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
val.lst = [lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(lab.tbl), function(i){x <- cmdb$LAB_RESULTS[cmdb$LAB_RESULTS$labm.code [%in%](https://rdrr.io/r/base/match.html) lab.tbl[[i]], [c](https://rdrr.io/r/base/c.html)("agency.code", "proj.code", "s.id", "o.id", "h.no", "labr.value")]; [names](https://rdrr.io/r/base/names.html)(x)[6] <- [names](https://rdrr.io/r/base/names.html)(lab.tbl)[i]; [return](https://rdrr.io/r/base/function.html)(x) })
[names](https://rdrr.io/r/base/names.html)(val.lst) = [names](https://rdrr.io/r/base/names.html)(lab.tbl)
val.lst$oc$oc = val.lst$oc$oc * 10
[names](https://rdrr.io/r/base/names.html)(val.lst$ocP)[6] = "oc"
val.lst$oc <- [rbind](https://rdrr.io/r/base/cbind.html)(val.lst$oc, val.lst$ocP)
val.lst$ocP = NULL
#summary(val.lst$oc$oc)
#str(val.lst, max.level = 1)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(val.lst)){ val.lst[[i]]$h.id <- [paste](https://rdrr.io/r/base/paste.html)(val.lst[[i]]$agency.code, val.lst[[i]]$proj.code, val.lst[[i]]$s.id, val.lst[[i]]$o.id, val.lst[[i]]$h.no, sep="_") }
au.hor <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(val.lst, function(x){x[,6:7]}), match="first")
#str(as.factor(au.hor$h.id))
cmdb$HORIZONS$h.id = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$agency.code, cmdb$HORIZONS$proj.code, cmdb$HORIZONS$s.id, cmdb$HORIZONS$o.id, cmdb$HORIZONS$h.no, sep="_")
cmdb$HORIZONS$hzn_desgn = [paste](https://rdrr.io/r/base/paste.html)(cmdb$HORIZONS$h.desig.master, cmdb$HORIZONS$h.desig.subdiv, cmdb$HORIZONS$h.desig.suffix, sep="")
au.horT <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(cmdb$HORIZONS[,[c](https://rdrr.io/r/base/c.html)("h.id","s.id","h.no","h.texture","hzn_desgn","h.upper.depth","h.lower.depth")], au.hor, au.xy))
au.horT$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(au.horT$o.date.desc, format="%d%m%Y"), "%Y-%m-%d")
au.horT$sand_tot_psa = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$sand_tot_psa), 100-(au.horT$clay_tot_psa + au.horT$silt_tot_psa), au.horT$sand_tot_psa)
au.horT$hzn_top = au.horT$h.upper.depth*100
au.horT$hzn_bot = au.horT$h.lower.depth*100
au.horT$oc_d = [signif](https://rdrr.io/r/base/Round.html)(au.horT$oc / 1000 * au.horT$db_od * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(au.horT$wpg2), 0, au.horT$wpg2))/100, 3)
au.cols.n = [c](https://rdrr.io/r/base/c.html)("s.id", "o.location.notes", "site_obsdate", "o.longitude.GDA94", "o.latitude.GDA94", "h.id", "h.no", "hzn_top", "hzn_bot", "hzn_desgn", "h.texture", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = au.cols.n[[which](https://rdrr.io/r/base/which.html)(!au.cols.n [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(au.horT))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ au.horT[,i] = NA } }
chemsprops.NatSoil = au.horT[,au.cols.n]
chemsprops.NatSoil$source_db = "CSIRO_NatSoil"
chemsprops.NatSoil$confidence_degree = 4
chemsprops.NatSoil$project_url = "https://www.csiro.au/en/Do-business/Services/Enviro/Soil-archive"
chemsprops.NatSoil$citation_url = "https://doi.org/10.25919/5eeb2a56eac12"
chemsprops.NatSoil = complete.vars(chemsprops.NatSoil, sel = [c](https://rdrr.io/r/base/c.html)("oc","db_od","clay_tot_psa","ph_h2o"), coords = [c](https://rdrr.io/r/base/c.html)("o.longitude.GDA94", "o.latitude.GDA94"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NatSoil)
#> [1] 70791 36
```
#### 5\.3\.0\.29 NAMSOTER
* Coetzee, M. E. (2001\). [NAMSOTER, a SOTER database for Namibia](https://edepot.wur.nl/485173). Agroecological Zoning, 458\.
* Coetzee, M. E. (2009\). Chemical characterisation of the soils of East Central Namibia (Doctoral dissertation, Stellenbosch: University of Stellenbosch).
```
if({
nam.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_profiles.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
nam.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Namibia/NAMSOTER/Namibia_all_horizons.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "999", "9999", "NA"))
#summary(nam.hors$TOTN)
#summary(nam.hors$TOTC)
nam.hors$hzn_top <- NA
nam.hors$hzn_top <- [ifelse](https://rdrr.io/r/base/ifelse.html)(nam.hors$HONU==1, 0, nam.hors$hzn_top)
h.lst <- [lapply](https://rdrr.io/r/base/lapply.html)(1:7, function(x){[which](https://rdrr.io/r/base/which.html)(nam.hors$HONU==x)})
for(i in 2:7){
sel <- [match](https://rdrr.io/r/base/match.html)(nam.hors$PRID[h.lst[[i]]], nam.hors$PRID[h.lst[[i-1]]])
nam.hors$hzn_top[h.lst[[i]]] <- nam.hors$HBDE[h.lst[[i-1]]][sel]
}
nam.hors$HBDE <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(nam.hors$HBDE), nam.hors$hzn_top+50, nam.hors$HBDE)
#summary(nam.hors$HBDE)
namALL = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(nam.hors, nam.profs, by=[c](https://rdrr.io/r/base/c.html)("PRID"))
namALL$k_ext = namALL$EXCK * 391
namALL$ca_ext = namALL$EXCA * 200
namALL$mg_ext = namALL$EXMG * 121
namALL$na_ext = namALL$EXNA * 230
#summary(namALL$MINA)
namALL$BULK <- [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$BULK>2.4, NA, namALL$BULK)
namALL$wpg2 = [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="D", 80, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="A", 60, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M", 25, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C", 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="V", 1, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="F", 2.5, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="M/A", 40, [ifelse](https://rdrr.io/r/base/ifelse.html)(namALL$MINA=="C/M", 15, 0))))))))
#hist(namALL$wpg2)
namALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(namALL$TOTC / 1000 * namALL$BULK * 1000 * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(namALL$wpg2), 0, namALL$wpg2))/100, 3)
#summary(namALL$oc_d)
#summary(namALL$PHAQ) ## very high ph
namALL$site_obsdate = 2000
nam.col = [c](https://rdrr.io/r/base/c.html)("PRID", "SLID", "site_obsdate", "LONG", "LATI", "labsampnum", "HONU", "hzn_top", "HBDE", "HODE", "PSCL", "CLPC", "STPC", "SDTO", "TOTC", "oc_d", "c_tot", "TOTN", "PHKC", "PHAQ", "ph_cacl2", "CECS", "cec_nh4", "ecec", "wpg2", "BULK", "ca_ext", "mg_ext", "na_ext", "k_ext", "ELCO", "ec_12pre")
x.na = nam.col[[which](https://rdrr.io/r/base/which.html)(!nam.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(namALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ namALL[,i] = NA } }
chemsprops.NAMSOTER = namALL[,nam.col]
chemsprops.NAMSOTER$source_db = "NAMSOTER"
chemsprops.NAMSOTER$confidence_degree = 2
chemsprops.NAMSOTER$project_url = ""
chemsprops.NAMSOTER$citation_url = "https://edepot.wur.nl/485173"
chemsprops.NAMSOTER = complete.vars(chemsprops.NAMSOTER, sel = [c](https://rdrr.io/r/base/c.html)("TOTC","CLPC","PHAQ"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LATI"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.NAMSOTER)
#> [1] 2953 36
```
#### 5\.3\.0\.30 Worldwide organic soil carbon and nitrogen data
* Zinke, P. J., Millemann, R. E., \& Boden, T. A. (1986\). [Worldwide organic soil carbon and nitrogen data](https://cdiac.ess-dive.lbl.gov/ftp/ndp018/ndp018.pdf). Carbon Dioxide Information Center, Environmental Sciences Division, Oak Ridge National Laboratory. Data download URL: [https://dx.doi.org/10\.3334/CDIAC/lue.ndp018](https://dx.doi.org/10.3334/CDIAC/lue.ndp018)
* Note: poor spatial location accuracy, i.e. <10 km. Bulk density for many points has been estimated rather than measured. The sampling year has not been recorded, but the literature indicates: 1965, 1974, 1976, 1978, 1979, 1984. Most samples come from natural (undisturbed) vegetation areas.
```
if({
ndp.profs <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISCND/ndp018.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("-9999", "?", "NA"), stringsAsFactors = FALSE)
[names](https://rdrr.io/r/base/names.html)(ndp.profs) = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "CARBON", "NITROGEN", "LAT", "LONG", "ELEV", "SOURCE", "HOLDRIGE", "OLSON", "PARENT")
for(j in [c](https://rdrr.io/r/base/c.html)("CARBON","NITROGEN","ELEV")){ ndp.profs[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(ndp.profs[,j]) }
#summary(ndp.profs$CARBON)
lat.s <- [grep](https://rdrr.io/r/base/grep.html)("S", ndp.profs$LAT) # lat.n <- grep("N", ndp.profs$LAT)
ndp.profs$latitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LAT))
ndp.profs$latitude_decimal_degrees[lat.s] = ndp.profs$latitude_decimal_degrees[lat.s] * -1
lon.w <- [grep](https://rdrr.io/r/base/grep.html)("W", ndp.profs$LONG) # lon.e <- grep("E", ndp.profs$LONG, fixed = TRUE)
ndp.profs$longitude_decimal_degrees = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)("[^0-9.-]", "", ndp.profs$LONG))
ndp.profs$longitude_decimal_degrees[lon.w] = ndp.profs$longitude_decimal_degrees[lon.w] * -1
#plot(ndp.profs[,c("longitude_decimal_degrees", "latitude_decimal_degrees")])
ndp.profs$hzn_top = 0; ndp.profs$hzn_bot = 100
## Sampling years from the doc: 1965, 1974, 1976, 1978, 1979, 1984
ndp.profs$site_obsdate = "1982"
ndp.col = [c](https://rdrr.io/r/base/c.html)("PROFILE", "CODE", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence","hzn_top","hzn_bot","hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "CARBON", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = ndp.col[[which](https://rdrr.io/r/base/which.html)(!ndp.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(ndp.profs))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ ndp.profs[,i] = NA } }
chemsprops.ISCND = ndp.profs[,ndp.col]
chemsprops.ISCND$source_db = "ISCND"
chemsprops.ISCND$confidence_degree = 8
chemsprops.ISCND$project_url = "https://iscn.fluxdata.org/data/"
chemsprops.ISCND$citation_url = "https://dx.doi.org/10.3334/CDIAC/lue.ndp018"
chemsprops.ISCND = complete.vars(chemsprops.ISCND, sel = [c](https://rdrr.io/r/base/c.html)("CARBON"), coords = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ISCND)
#> [1] 3977 36
```
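The NDP-018 coordinate strings mix numbers with hemisphere letters, so the import above strips every character except digits, the decimal point and the minus sign, and then flips the sign for the southern (and western) hemisphere. A toy example with made-up values:

```
lat <- c("45.5N", "12.25S")
dd <- as.numeric(gsub("[^0-9.-]", "", lat))   # keep digits, "." and "-" only
dd[grep("S", lat)] <- dd[grep("S", lat)] * -1
dd
#> [1]  45.50 -12.25
```
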
#### 5\.3\.0\.31 Interior Alaska Carbon and Nitrogen stocks
* Manies, K., Waldrop, M., and Harden, J. (2020\): Generalized models to estimate carbon and nitrogen stocks of organic soil horizons in Interior Alaska, Earth Syst. Sci. Data, 12, 1745–1757, [https://doi.org/10\.5194/essd\-12\-1745\-2020](https://doi.org/10.5194/essd-12-1745-2020), Data download URL: [https://doi.org/10\.5066/P960N1F9](https://doi.org/10.5066/P960N1F9)
```
if({
al.gps <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Site_GPS_coordinates_v1-1.csv", stringsAsFactors = FALSE)
## Different datums!
#summary(as.factor(al.gps$Datum))
al.gps1 = al.gps[al.gps$Datum=="NAD83",]
coordinates(al.gps1) = ~ Longitude + Latitude
proj4string(al.gps1) = "+proj=longlat +datum=NAD83"
al.gps0 = spTransform(al.gps1, CRS("+proj=longlat +datum=WGS84"))
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Longitude"] = al.gps0@coords[,1]
al.gps[[which](https://rdrr.io/r/base/which.html)(al.gps$Datum=="NAD83"),"Latitude"] = al.gps0@coords[,2]
al.gps$site = al.gps$Site
al.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/USA/Alaska_Interior/Generalized_models_for_CandN_Alaska_v1-1.csv", stringsAsFactors = FALSE)
al.hor$hzn_top = al.hor$depth - [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$thickness)
al.hor$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)(al.hor$date, format = "%m/%d/%Y"), "%Y-%m-%d")
al.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$carbon) * 10
al.hor$n_tot = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$nitrogen) * 10
al.hor$oc_d = [as.numeric](https://rdrr.io/r/base/numeric.html)(al.hor$Cdensity) * 1000
#summary(al.hor$oc_d)
al.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(al.hor, al.gps, by=[c](https://rdrr.io/r/base/c.html)("site"))
al.col = [c](https://rdrr.io/r/base/c.html)("profile", "description", "site_obsdate", "Longitude", "Latitude", "sampleID", "layer_sequence", "hzn_top", "depth", "Hcode", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "BDfine", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = al.col[[which](https://rdrr.io/r/base/which.html)(!al.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(al.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ al.horA[,i] = NA } }
chemsprops.Alaska = al.horA[,al.col]
chemsprops.Alaska$source_db = "Alaska_interior"
chemsprops.Alaska$confidence_degree = 1
chemsprops.Alaska$project_url = "https://www.usgs.gov/centers/gmeg"
chemsprops.Alaska$citation_url = "https://doi.org/10.5194/essd-12-1745-2020"
chemsprops.Alaska = complete.vars(chemsprops.Alaska, sel = [c](https://rdrr.io/r/base/c.html)("oc","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Alaska)
#> [1] 3882 36
```
#### 5\.3\.0\.32 Croatian Soil Pedon data
* Martinović J., (2000\) [“Tla u Hrvatskoj”](https://books.google.nl/books?id=k_a2MgAACAAJ) (“Soils of Croatia”), monograph, Državna uprava za zaštitu prirode i okoliša, 269 pp., Zagreb. ISBN: 9536793059
* Bašić F., (2014\) [“The Soils of Croatia”](https://books.google.nl/books?id=VbJEAAAAQBAJ). World Soils Book Series, Springer Science \& Business Media, 179 pp. ISBN: 9400758154
```
if({
bpht.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_sites_1997.csv", stringsAsFactors = FALSE)
bpht.hors <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Croatia/WBSoilHR_1997.csv", stringsAsFactors = FALSE)
## filter typos
for(j in [c](https://rdrr.io/r/base/c.html)("GOR", "DON", "MKP", "PH1", "PH2", "MSP", "MP", "MG", "HUM", "EXTN", "EXTP", "EXTK", "CAR")){
bpht.hors[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(bpht.hors[,j])
}
## Convert to the USDA standard
bpht.hors$sand_tot_psa <- bpht.hors$MSP * 0.8 + bpht.hors$MKP
bpht.hors$silt_tot_psa <- bpht.hors$MP + bpht.hors$MSP * 0.2
bpht.hors$oc <- [signif](https://rdrr.io/r/base/Round.html)(bpht.hors$HUM/1.724 * 10, 3)
## summary(bpht.hors$sand_tot_psa)
bpht.s.lst <- [c](https://rdrr.io/r/base/c.html)("site_key", "UZORAK", "Cro16.30_X", "Cro16.30_Y", "FITOC", "STIJENA", "HID_DREN", "DUBINA")
bpht.hor = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(bpht.site[,bpht.s.lst], bpht.hors)
bpht.hor$wpg2 = bpht.hor$STIJENA
bpht.hor$DON <- [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(bpht.hor$DON), bpht.hor$GOR+50, bpht.hor$DON)
bpht.hor$depth <- bpht.hor$GOR + (bpht.hor$DON - bpht.hor$GOR)/2
bpht.hor = bpht.hor[,]
bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)] <- bpht.hor$wpg2[[which](https://rdrr.io/r/base/which.html)(bpht.hor$GOR<30)]*.3
bpht.hor$sample_key = [make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)(bpht.hor$PEDOL_ID, bpht.hor$OZN, sep="_"))
bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"] <- bpht.hor$sand_tot_psa[bpht.hor$sample_key=="805_Amo"]/10
## convert N, P, K
#summary(bpht.hor$EXTK) -- measurements units?
bpht.hor$p_ext = bpht.hor$EXTP * 4.364
bpht.hor$k_ext = bpht.hor$EXTK * 8.3013
bpht.hor = bpht.hor[,]
## coordinates:
bpht.pnts = SpatialPointsDataFrame(bpht.hor[,[c](https://rdrr.io/r/base/c.html)("Cro16.30_X","Cro16.30_Y")], bpht.hor["site_key"], proj4string = CRS("+proj=tmerc +lat_0=0 +lon_0=16.5 +k=0.9999 +x_0=2500000 +y_0=0 +ellps=bessel +towgs84=550.499,164.116,475.142,5.80967,2.07902,-11.62386,0.99999445824 +units=m"))
bpht.pnts.ll <- spTransform(bpht.pnts, CRS("+proj=longlat +datum=WGS84"))
bpht.hor$longitude_decimal_degrees = bpht.pnts.ll@coords[,1]
bpht.hor$latitude_decimal_degrees = bpht.pnts.ll@coords[,2]
bpht.h.lst <- [c](https://rdrr.io/r/base/c.html)('site_key', 'OZ_LIST_PROF', 'UZORAK', 'longitude_decimal_degrees', 'latitude_decimal_degrees', 'labsampnum', 'layer_sequence', 'GOR', 'DON', 'OZN', 'TT', 'MG', 'silt_tot_psa', 'sand_tot_psa', 'oc', 'oc_d', 'c_tot', 'EXTN', 'PH2', 'PH1', 'ph_cacl2', 'cec_sum', 'cec_nh4', 'ecec', 'wpg2', 'db_od', 'ca_ext', 'mg_ext', 'na_ext', 'k_ext', 'ec_satp', 'ec_12pre')
x.na = bpht.h.lst[[which](https://rdrr.io/r/base/which.html)(!bpht.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(bpht.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ bpht.hor[,i] = NA } }
chemsprops.bpht = bpht.hor[,bpht.h.lst]
chemsprops.bpht$source_db = "Croatian_Soil_Pedon"
chemsprops.bpht$confidence_degree = 1
chemsprops.bpht$project_url = "http://www.haop.hr/"
chemsprops.bpht$citation_url = "https://books.google.nl/books?id=k_a2MgAACAAJ"
chemsprops.bpht = complete.vars(chemsprops.bpht, sel = [c](https://rdrr.io/r/base/c.html)("oc","MG","PH1","k_ext"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.bpht)
#> [1] 5746 36
```
#### 5\.3\.0\.33 Remnant native SOC database
* Sanderman, J., (2017\) “Remnant native SOC database for release.xlsx”, Soil carbon profile data from paired land use comparisons, [https://doi.org/10\.7910/DVN/QQQM8V/8MSBNI](https://doi.org/10.7910/DVN/QQQM8V/8MSBNI), Harvard Dataverse, V1
```
if({
rem.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 3)
rem.site <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 2)
rem.ref <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/WHRC_remnant_SOC/remnant+native+SOC+database+for+release.xlsx", sheet = 4)
rem.site = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.site, rem.ref[,[c](https://rdrr.io/r/base/c.html)("Source.No.","DOI","Sample_year")], by=[c](https://rdrr.io/r/base/c.html)("Source.No."))
rem.site$Site = rem.site$Site.ID
rem.horA = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rem.hor, rem.site, by=[c](https://rdrr.io/r/base/c.html)("Site"))
rem.horA$hzn_top = rem.horA$'U_depth.(m)'*100
rem.horA$hzn_bot = rem.horA$'L_depth.(m)'*100
rem.horA$db_od = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)')), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'estimated.BD.(Mg/m3)'), [as.numeric](https://rdrr.io/r/base/numeric.html)(rem.horA$'measured.BD.(Mg/m3)'))
rem.horA$oc_d = [signif](https://rdrr.io/r/base/Round.html)(rem.horA$'OC.(g/kg)' * rem.horA$db_od, 3)
#summary(rem.horA$oc_d)
rem.col = [c](https://rdrr.io/r/base/c.html)("Source.No.", "Site", "Sample_year", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "OC.(g/kg)", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = rem.col[[which](https://rdrr.io/r/base/which.html)(!rem.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(rem.horA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ rem.horA[,i] = NA } }
chemsprops.RemnantSOC = rem.horA[,rem.col]
chemsprops.RemnantSOC$source_db = "WHRC_remnant_SOC"
chemsprops.RemnantSOC$confidence_degree = 8
chemsprops.RemnantSOC$project_url = "https://www.woodwellclimate.org/research-area/carbon/"
chemsprops.RemnantSOC$citation_url = "http://dx.doi.org/10.1073/pnas.1706103114"
chemsprops.RemnantSOC = complete.vars(chemsprops.RemnantSOC, sel = [c](https://rdrr.io/r/base/c.html)("OC.(g/kg)","oc_d"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.RemnantSOC)
#> [1] 1604 36
```
#### 5\.3\.0\.34 Soil Health DB
* Jian, J., Du, X., \& Stewart, R. D. (2020\). A database for global soil health assessment. Scientific Data, 7(1\), 1\-8\. [https://doi.org/10\.1038/s41597\-020\-0356\-3](https://doi.org/10.1038/s41597-020-0356-3). Data download URL: <https://github.com/jinshijian/SoilHealthDB>
Note: some information about the column names is available ([https://www.nature.com/articles/s41597\-020\-0356\-3/tables/3](https://www.nature.com/articles/s41597-020-0356-3/tables/3)) but a detailed explanation is missing.
```
if({
shdb.hor <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/INT/SoilHealthDB/SoilHealthDB_V2.xlsx", sheet = 1, na.strings = [c](https://rdrr.io/r/base/c.html)("NA", "NotAvailable", "Not-available"))
#summary(as.factor(shdb.hor$SamplingDepth))
shdb.hor$hzn_top = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][1] }))
shdb.hor$hzn_bot = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(shdb.hor$SamplingDepth, function(i){ [strsplit](https://rdrr.io/r/base/strsplit.html)(i, "-to-")[[1]][2] }))
shdb.hor$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_top), 0, shdb.hor$hzn_top)
shdb.hor$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(shdb.hor$hzn_bot), 15, shdb.hor$hzn_bot)
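## records with missing sampling depths are assumed to represent a 0-15 cm topsoil layer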
shdb.hor$oc = [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor$BackgroundSOC) * 10
shdb.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(shdb.hor$oc * shdb.hor$SoilBD, 3)
for(j in [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SiltPerc", "SandPerc", "SoilpH")){ shdb.hor[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(shdb.hor[,j]) }
#summary(shdb.hor$oc_d)
shdb.col = [c](https://rdrr.io/r/base/c.html)("StudyID", "ExperimentID", "SamplingYear", "Longitude", "Latitude", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "Texture", "ClayPerc", "SiltPerc", "SandPerc", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "SoilpH", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "wpg2", "SoilBD", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = shdb.col[[which](https://rdrr.io/r/base/which.html)(!shdb.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(shdb.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ shdb.hor[,i] = NA } }
chemsprops.SoilHealthDB = shdb.hor[,shdb.col]
chemsprops.SoilHealthDB$source_db = "SoilHealthDB"
chemsprops.SoilHealthDB$confidence_degree = 8
chemsprops.SoilHealthDB$project_url = "https://github.com/jinshijian/SoilHealthDB"
chemsprops.SoilHealthDB$citation_url = "https://doi.org/10.1038/s41597-020-0356-3"
chemsprops.SoilHealthDB = complete.vars(chemsprops.SoilHealthDB, sel = [c](https://rdrr.io/r/base/c.html)("ClayPerc", "SoilpH", "oc"), coords = [c](https://rdrr.io/r/base/c.html)("Longitude", "Latitude"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SoilHealthDB)
#> [1] 120 36
```
#### 5\.3\.0\.35 Global Harmonized Dataset of SOC change under perennial crops
* Ledo, A., Hillier, J., Smith, P. et al. (2019\) A global, empirical, harmonised dataset of soil organic carbon changes under perennial crops. Sci Data 6, 57\. [https://doi.org/10\.1038/s41597\-019\-0062\-1](https://doi.org/10.1038/s41597-019-0062-1). Data download URL: [https://doi.org/10\.6084/m9\.figshare.7637210\.v2](https://doi.org/10.6084/m9.figshare.7637210.v2)
Note: many records are missing years for the PREVIOUS SOC AND SOIL CHARACTERISTICS columns.
```
if({
[library](https://rdrr.io/r/base/library.html)(["readxl"](https://readxl.tidyverse.org))
socpdb <- readxl::[read_excel](https://readxl.tidyverse.org/reference/read_excel.html)("/mnt/diskstation/data/Soil_points/INT/SOCPDB/SOC_perennials_DATABASE.xls", skip=1, sheet = 1)
#names(socpdb)
#summary(as.numeric(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$yearPpub)-5, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$year_measure))
socpdb$year_measure = [ifelse](https://rdrr.io/r/base/ifelse.html)(socpdb$year_measure<1960, NA, socpdb$year_measure)
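## missing measurement years are assumed to be 5 years before the publication year;
## implausible years (<1960) are set to NA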
socpdb$depth_current = socpdb$soil_to_cm_current - socpdb$soil_from_cm_current
socpdb = socpdb[socpdb$depth_current>5,]
socpdb$SOC_g_kg_current = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_current) / (socpdb$depth_current/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$bulk_density_Mg_m3_current) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_current))
socpdb$depth_previous = socpdb$soil_to_cm_previous - socpdb$soil_from_cm_previous
socpdb$SOC_g_kg_previous = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous)), [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_Mg_ha_previous) / (socpdb$depth_previous/100 * [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$Bulkdensity_previous) * 1000) * 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb$SOC_g_kg_previous))
hor.b = [which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(socpdb) [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ID", "plotID", "Longitud", "Latitud", "year_measure", "years_since_luc", "USDA", "original_source"))
socpdb1 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_current", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
#summary(as.numeric(socpdb1$years_since_luc))
## 10 yrs median
socpdb1$site_obsdate = socpdb1$year_measure
socpdb2 = socpdb[,[c](https://rdrr.io/r/base/c.html)(hor.b, [grep](https://rdrr.io/r/base/grep.html)("_previous", [names](https://rdrr.io/r/base/names.html)(socpdb)))]
socpdb2$site_obsdate = socpdb2$year_measure - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)([as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc)), 10, [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdb2$years_since_luc))
[colnames](https://rdrr.io/r/base/colnames.html)(socpdb2) <- [sub](https://rdrr.io/r/base/grep.html)("_previous", "_current", [colnames](https://rdrr.io/r/base/colnames.html)(socpdb2))
nm.socpdb = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_h2o", "db_od")
sel.socdpb1 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "bulk_density_Mg_m3_current")
sel.socdpb2 = [c](https://rdrr.io/r/base/c.html)("ID", "original_source", "site_obsdate", "Longitud", "Latitud", "soil_from_cm_current", "soil_to_cm_current", "%clay_current", "%silt_current", "%sand_current", "SOC_g_kg_current", "ph_current", "Bulkdensity_current")
socpdbALL = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([list](https://rdrr.io/r/base/list.html)(socpdb1[,sel.socdpb1], socpdb2[,sel.socdpb2]), function(i){ dplyr::[mutate_all](https://dplyr.tidyverse.org/reference/mutate_all.html)([setNames](https://rdrr.io/r/stats/setNames.html)(i, nm.socpdb), as.character) })))
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(socpdbALL)){ socpdbALL[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(socpdbALL[,j]) }
#summary(socpdbALL$oc) ## mean = 15
#summary(socpdbALL$db_od)
#summary(socpdbALL$ph_h2o)
socpdbALL$oc_d = [signif](https://rdrr.io/r/base/Round.html)(socpdbALL$oc * socpdbALL$db_od, 3)
#summary(socpdbALL$oc_d)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(socpdbALL))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ socpdbALL[,i] = NA } }
chemsprops.SOCPDB <- socpdbALL[,col.names]
chemsprops.SOCPDB$source_db = "SOCPDB"
chemsprops.SOCPDB$confidence_degree = 5
chemsprops.SOCPDB$project_url = "https://africap.info/"
chemsprops.SOCPDB$citation_url = "https://doi.org/10.1038/s41597-019-0062-1"
chemsprops.SOCPDB = complete.vars(chemsprops.SOCPDB, sel = [c](https://rdrr.io/r/base/c.html)("oc","ph_h2o","clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SOCPDB)
#> [1] 1526 36
```
#### 5\.3\.0\.36 Stocks of organic carbon in German agricultural soils (BZE\_LW)
* Poeplau, C., Jacobs, A., Don, A., Vos, C., Schneider, F., Wittnebel, M., … \& Flessa, H. (2020\). [Stocks of organic carbon in German agricultural soils—Key results of the first comprehensive inventory](https://doi.org/10.1002/jpln.202000113). Journal of Plant Nutrition and Soil Science, 183(6\), 665\-681\. [https://doi.org/10\.1002/jpln.202000113](https://doi.org/10.1002/jpln.202000113). Data download URL: [https://doi.org/10\.3220/DATA20200203151139](https://doi.org/10.3220/DATA20200203151139)
Note: for protection of data privacy, the coordinates were randomly generated within a radius of 4 km around the planned sampling point. This dataset is hence probably not suitable for spatial analysis or predictive soil mapping.
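As a rough illustration of the positional uncertainty this introduces, the sketch below (a hypothetical helper assuming a uniform random offset within a 4 km radius and a simple planar approximation; the actual anonymization procedure may differ) displaces a point accordingly:
```
## illustration only: random offset of a point within a radius_m circle
offset_point <- function(lon, lat, radius_m = 4000){
  ang <- runif(1, 0, 2*pi)
  d <- radius_m * sqrt(runif(1))           ## uniform over the disc
  dlat <- (d * cos(ang)) / 111320          ## metres to degrees latitude
  dlon <- (d * sin(ang)) / (111320 * cos(lat * pi/180))
  c(longitude = lon + dlon, latitude = lat + dlat)
}
offset_point(10.45, 51.16)
```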
```
if({
site.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/SITE.xlsx", sheet = 1)
site.de$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([paste0](https://rdrr.io/r/base/paste.html)("01-", site.de$Sampling_month, "-", site.de$Sampling_year), format="%d-%m-%Y"), "%Y-%m-%d")
site.de.xy = site.de[,[c](https://rdrr.io/r/base/c.html)("PointID","xcoord","ycoord")]
## 3104
coordinates(site.de.xy) <- ~xcoord+ycoord
proj4string(site.de.xy) <- CRS("+proj=utm +zone=32 +ellps=WGS84 +datum=WGS84 +units=m +no_defs")
site.de.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.de.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.de$longitude_decimal_degrees = site.de.ll[,2]
site.de$latitude_decimal_degrees = site.de.ll[,3]
hor.de <- openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1)
#hor.de = plyr::join(openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/LABORATORY_DATA.xlsx", sheet = 1), openxlsx::read.xlsx("/mnt/diskstation/data/Soil_points/Germany/HORIZON_DATA.xlsx", sheet = 1), by="PointID")
## 17,189 rows
horALL.de = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.de, site.de, by="PointID")
## Sand content [Mass-%]; grain size 63-2000µm (DIN ISO 11277)
horALL.de$sand_tot_psa <- horALL.de$gS + horALL.de$mS + horALL.de$fS + 0.2 * horALL.de$gU
horALL.de$silt_tot_psa <- horALL.de$fU + horALL.de$mU + 0.8 * horALL.de$gU
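## the 0.2/0.8 split of the coarse silt fraction (gU, 20-63 um) is an assumed approximation
## for shifting the German/ISO 63 um sand limit to the USDA 50 um limit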
## Convert millisiemens/meter [mS/m] to microsiemens/centimeter [μS/cm, uS/cm]
horALL.de$ec_satp = horALL.de$EC_H2O / 10
hor.sel.de <- [c](https://rdrr.io/r/base/c.html)("PointID", "Main.soil.type", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "layer_sequence", "Layer.upper.limit", "Layer.lower.limit", "hzn_desgn", "Soil.texture.class", "Clay", "silt_tot_psa", "sand_tot_psa", "TOC", "oc_d", "TC", "TN", "ph_kcl", "pH_H2O", "pH_CaCl2", "cec_sum", "cec_nh4", "ecec", "Rock.fragment.fraction", "BD_FS", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
#summary(horALL.de$TOC) ## mean = 12.3
#summary(horALL.de$BD_FS) ## mean = 1.41
#summary(horALL.de$pH_H2O)
horALL.de$oc_d = [signif](https://rdrr.io/r/base/Round.html)(horALL.de$TOC * horALL.de$BD_FS * (1-horALL.de$Rock.fragment.fraction/100), 3)
#summary(horALL.de$oc_d)
x.na = hor.sel.de[[which](https://rdrr.io/r/base/which.html)(!hor.sel.de [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.de))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.de[,i] = NA } }
chemsprops.BZE_LW <- horALL.de[,hor.sel.de]
chemsprops.BZE_LW$source_db = "BZE_LW"
chemsprops.BZE_LW$confidence_degree = 3
chemsprops.BZE_LW$project_url = "https://www.thuenen.de/de/ak/"
chemsprops.BZE_LW$citation_url = "https://doi.org/10.1002/jpln.202000113"
chemsprops.BZE_LW = complete.vars(chemsprops.BZE_LW, sel = [c](https://rdrr.io/r/base/c.html)("TOC", "pH_H2O", "Clay"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.BZE_LW)
#> [1] 17187 36
```
#### 5\.3\.0\.37 AARDEWERK\-Vlaanderen\-2010
* Beckers, V., Jacxsens, P., Van De Vreken, Ph., Van Meirvenne, M., Van Orshoven, J. (2011\). Gebruik en installatie van de bodemdatabank AARDEWERK\-Vlaanderen\-2010\. Spatial Applications Division Leuven, Belgium. Data download URL: [https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4\-8070\-4220\-afac\-258ea040fb30](https://www.dov.vlaanderen.be/geonetwork/home/api/records/78e15dd4-8070-4220-afac-258ea040fb30)
* Ottoy, S., Beckers, V., Jacxsens, P., Hermy, M., \& Van Orshoven, J. (2015\). [Multi\-level statistical soil profiles for assessing regional soil organic carbon stocks](https://doi.org/10.1016/j.geoderma.2015.04.001). Geoderma, 253, 12\-20\. [https://doi.org/10\.1016/j.geoderma.2015\.04\.001](https://doi.org/10.1016/j.geoderma.2015.04.001)
```
if({
site.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Profiel.csv")
site.vl$site_obsdate = [format](https://rdrr.io/r/base/format.html)([as.Date](https://rdrr.io/r/base/as.Date.html)([sapply](https://rdrr.io/r/base/lapply.html)(site.vl$Profilering_Datum, function(i){[strsplit](https://rdrr.io/r/base/strsplit.html)(i, " ")[[1]][1]}), format="%d-%m-%Y"), "%Y-%m-%d")
site.vl.xy = site.vl[,[c](https://rdrr.io/r/base/c.html)("ID","Coordinaat_Lambert72_X","Coordinaat_Lambert72_Y")]
## 7020
site.vl.xy = site.vl.xy[[complete.cases](https://rdrr.io/r/stats/complete.cases.html)(site.vl.xy),]
coordinates(site.vl.xy) <- ~Coordinaat_Lambert72_X+Coordinaat_Lambert72_Y
proj4string(site.vl.xy) <- CRS("+init=epsg:31300")
site.vl.ll <- [data.frame](https://rdrr.io/r/base/data.frame.html)(spTransform(site.vl.xy, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84")))
site.vl$longitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_X
site.vl$latitude_decimal_degrees = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(site.vl["ID"], site.vl.ll, by="ID")$Coordinaat_Lambert72_Y
site.vl$Profiel_ID = site.vl$ID
hor.vl <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Belgium/Vlaanderen/Aardewerk-Vlaanderen-2010_Horizont.csv")
## 42,529 rows
horALL.vl = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hor.vl, site.vl, by="Profiel_ID")
horALL.vl$oc = horALL.vl$Humus*10 /1.724
[summary](https://rdrr.io/r/base/summary.html)(horALL.vl$oc) ## mean = 7.8
#summary(horALL.vl$pH_H2O)
horALL.vl$hzn_top <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_boven1", "Diepte_grens_boven2")], na.rm=TRUE)/2
horALL.vl$hzn_bot <- [rowSums](https://rdrr.io/r/base/colSums.html)(horALL.vl[,[c](https://rdrr.io/r/base/c.html)("Diepte_grens_onder1","Diepte_grens_onder2")], na.rm=TRUE)/2
horALL.vl$sand_tot_psa <- horALL.vl$T50_100 + horALL.vl$T100_200 + horALL.vl$T200_500 + horALL.vl$T500_1000 + horALL.vl$T1000_2000
horALL.vl$silt_tot_psa <- horALL.vl$T2_10 + horALL.vl$T10_20 + horALL.vl$T20_50
horALL.vl$tex_psda = [paste0](https://rdrr.io/r/base/paste.html)(horALL.vl$HorizontTextuur_code1, horALL.vl$HorizontTextuur_code2)
## some corrupt coordinates
horALL.vl <- horALL.vl[horALL.vl$latitude_decimal_degrees > 50.6,]
hor.sel.vl <- [c](https://rdrr.io/r/base/c.html)("Profiel_ID", "Bodemgroep", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "labsampnum", "Hor_nr", "hzn_top", "hzn_bot", "Naam", "tex_psda", "T0_2", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "pH_KCl", "pH_H2O", "ph_cacl2", "Sorptiecapaciteit_Totaal", "cec_nh4", "ecec", "Tgroter_dan_2000", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = hor.sel.vl[[which](https://rdrr.io/r/base/which.html)(!hor.sel.vl [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(horALL.vl))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ horALL.vl[,i] = NA } }
chemsprops.Vlaanderen <- horALL.vl[,hor.sel.vl]
chemsprops.Vlaanderen$source_db = "Vlaanderen"
chemsprops.Vlaanderen$confidence_degree = 2
chemsprops.Vlaanderen$project_url = "https://www.dov.vlaanderen.be"
chemsprops.Vlaanderen$citation_url = "https://doi.org/10.1016/j.geoderma.2015.04.001"
chemsprops.Vlaanderen = complete.vars(chemsprops.Vlaanderen, sel = [c](https://rdrr.io/r/base/c.html)("oc", "pH_H2O", "T0_2"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.Vlaanderen)
#> [1] 41310 36
```
#### 5\.3\.0\.38 Chilean Soil Organic Carbon database
* Pfeiffer, M., Padarian, J., Osorio, R., Bustamante, N., Olmedo, G. F., Guevara, M., et al. (2020\) [CHLSOC: the Chilean Soil Organic Carbon database, a multi\-institutional collaborative effort](https://doi.org/10.5194/essd-12-457-2020). Earth Syst. Sci. Data, 12, 457–468, [https://doi.org/10\.5194/essd\-12\-457\-2020](https://doi.org/10.5194/essd-12-457-2020). Data download URL: [https://doi.org/10\.17605/OSF.IO/NMYS3](https://doi.org/10.17605/OSF.IO/NMYS3)
```
if({
chl.hor <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Chile/CHLSOC/CHLSOC_v1.0.csv", stringsAsFactors = FALSE)
#summary(chl.hor$oc)
chl.hor$oc = chl.hor$oc*10
#summary(chl.hor$bd)
chl.hor$oc_d = [signif](https://rdrr.io/r/base/Round.html)(chl.hor$oc * chl.hor$bd * (100 - [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(chl.hor$crf), 0, chl.hor$crf))/100, 3)
#summary(chl.hor$oc_d)
chl.col = [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "year", "long", "lat", "labsampnum", "layer_sequence", "top", "bottom", "hzn_desgn", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum", "cec_nh4", "ecec", "crf", "bd", "ca_ext", "mg_ext", "na_ext", "k_ext", "ec_satp", "ec_12pre")
x.na = chl.col[[which](https://rdrr.io/r/base/which.html)(!chl.col [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(chl.hor))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ chl.hor[,i] = NA } }
chemsprops.CHLSOC = chl.hor[,chl.col]
chemsprops.CHLSOC$source_db = "Chilean_SOCDB"
chemsprops.CHLSOC$confidence_degree = 4
chemsprops.CHLSOC$project_url = "https://doi.org/10.17605/OSF.IO/NMYS3"
chemsprops.CHLSOC$citation_url = "https://doi.org/10.5194/essd-12-457-2020"
chemsprops.CHLSOC = complete.vars(chemsprops.CHLSOC, sel = [c](https://rdrr.io/r/base/c.html)("oc", "bd"), coords = [c](https://rdrr.io/r/base/c.html)("long", "lat"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.CHLSOC)
#> [1] 16371 36
```
#### 5\.3\.0\.39 Scotland (NSIS\_1\)
* Lilly, A., Bell, J.S., Hudson, G., Nolan, A.J. \& Towers. W. (Compilers) (2010\). National soil inventory of Scotland (NSIS\_1\); site location, sampling and profile description protocols. (1978\-1988\). Technical Bulletin. Macaulay Institute, Aberdeen. [https://doi.org/10\.5281/zenodo.4650230](https://doi.org/10.5281/zenodo.4650230). Data download URL: [https://www.hutton.ac.uk/learning/natural\-resource\-datasets/soilshutton/soils\-maps\-scotland/download](https://www.hutton.ac.uk/learning/natural-resource-datasets/soilshutton/soils-maps-scotland/download)
```
if({
sco.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Scotland/NSIS_10km.csv")
coordinates(sco.xy) = ~ easting + northing
proj4string(sco.xy) = "EPSG:27700"
sco.ll = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(sco.xy, CRS("EPSG:4326")))
sco.ll$site_obsdate = [as.numeric](https://rdrr.io/r/base/numeric.html)([sapply](https://rdrr.io/r/base/lapply.html)(sco.ll$profile_da, function(x){[substr](https://rdrr.io/r/base/substr.html)(x, [nchar](https://rdrr.io/r/base/nchar.html)(x)-3, [nchar](https://rdrr.io/r/base/nchar.html)(x))}))
#hist(sco.ll$site_obsdate[sco.ll$site_obsdate>1000])
## no points after 1990!!
#summary(sco.ll$exch_k)
sco.in.name = [c](https://rdrr.io/r/base/c.html)("profile_id", "site_obsdate", "easting", "northing", "horz_top", "horz_botto",
"horz_symb", "sample_id", "texture_ps",
"sand_int", "silt_int", "clay", "carbon", "nitrogen", "ph_h2o", "exch_ca",
"exch_mg", "exch_na", "exch_k", "sum_cation")
#sco.in.name[which(!sco.in.name %in% names(sco.ll))]
sco.x = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(sco.ll[,sco.in.name])
#sco.x = sco.x[!sco.x$sample_id==0,]
#summary(sco.x$carbon)
sco.out.name = [c](https://rdrr.io/r/base/c.html)("usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees",
"hzn_bot", "hzn_top", "hzn_desgn", "labsampnum", "tex_psda", "sand_tot_psa", "silt_tot_psa",
"clay_tot_psa", "oc", "n_tot", "ph_h2o", "ca_ext",
"mg_ext", "na_ext", "k_ext", "cec_sum")
## translate values
sco.fun.lst = [as.list](https://rdrr.io/r/base/list.html)([rep](https://rdrr.io/r/base/rep.html)("as.numeric(x)*1", [length](https://rdrr.io/r/base/length.html)(sco.in.name)))
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="profile_id")]] = "paste(x)"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_ca")]] = "as.numeric(x)*200"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_mg")]] = "as.numeric(x)*121"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_k")]] = "as.numeric(x)*391"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="exch_na")]] = "as.numeric(x)*230"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="carbon")]] = "as.numeric(x)*10"
sco.fun.lst[[[which](https://rdrr.io/r/base/which.html)(sco.in.name=="nitrogen")]] = "as.numeric(x)*10"
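## the multipliers above assume carbon and nitrogen are reported in % (x10 gives g/kg) and the
## exchangeable cations in meq/100 g, converted to mg/kg via approximate equivalent weights x10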
## save translation rules:
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(sco.in.name, sco.out.name, [unlist](https://rdrr.io/r/base/unlist.html)(sco.fun.lst)), "scotland_soilab_transvalues.csv")
sco.soil = transvalues(sco.x, sco.out.name, sco.in.name, sco.fun.lst)
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sco.soil))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ sco.soil[,i] = NA } }
chemsprops.ScotlandNSIS1 = sco.soil[,col.names]
chemsprops.ScotlandNSIS1$source_db = "ScotlandNSIS1"
chemsprops.ScotlandNSIS1$confidence_degree = 2
chemsprops.ScotlandNSIS1$project_url = "http://soils.environment.gov.scot/"
chemsprops.ScotlandNSIS1$citation_url = "https://doi.org/10.5281/zenodo.4650230"
chemsprops.ScotlandNSIS1 = complete.vars(chemsprops.ScotlandNSIS1, sel = [c](https://rdrr.io/r/base/c.html)("oc", "ph_h2o", "clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.ScotlandNSIS1)
#> [1] 2977 36
```
#### 5\.3\.0\.40 Ecoforest map of Quebec, Canada
* Duchesne, L., Ouimet, R., (2021\). Digital mapping of soil texture in ecoforest polygons in Quebec, Canada. PeerJ 9:e11685 [https://doi.org/10\.7717/peerj.11685](https://doi.org/10.7717/peerj.11685). Data download URL: [https://doi.org/10\.7717/peerj.11685/supp\-1](https://doi.org/10.7717/peerj.11685/supp-1)
```
if({
que.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Canada/Quebec/RawData.csv")
#summary(as.factor(que.xy$Horizon))
## horizon depths were not measured - we assume 15-30 and 30-80 cm
que.xy$hzn_top = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 15, 30)
que.xy$hzn_bot = [ifelse](https://rdrr.io/r/base/ifelse.html)(que.xy$Horizon=="B", 30, 80)
que.xy$site_key = que.xy$usiteid
que.xy$latitude_decimal_degrees = que.xy$Latitude
que.xy$longitude_decimal_degrees = que.xy$Longitude
que.xy$hzn_desgn = que.xy$Horizon
que.xy$sand_tot_psa = que.xy$PC_Sand
que.xy$silt_tot_psa = que.xy$PC_Silt
que.xy$clay_tot_psa = que.xy$PC_Clay
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(que.xy))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ que.xy[,i] = NA } }
chemsprops.QuebecTEX = que.xy[,col.names]
chemsprops.QuebecTEX$source_db = "QuebecTEX"
chemsprops.QuebecTEX$confidence_degree = 4
chemsprops.QuebecTEX$project_url = ""
chemsprops.QuebecTEX$citation_url = "https://doi.org/10.7717/peerj.11685"
chemsprops.QuebecTEX = complete.vars(chemsprops.QuebecTEX, sel = [c](https://rdrr.io/r/base/c.html)("clay_tot_psa"))
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.QuebecTEX)
#> [1] 26648 36
```
#### 5\.3\.0\.41 Pseudo\-observations
* Pseudo\-observations using simulated points (world deserts)
```
if({
## 0 soil organic carbon + 98% sand content (deserts)
[load](https://rdrr.io/r/base/load.html)("deserts.pnt.rda")
nut.sim <- [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(spTransform(deserts.pnt, CRS("+proj=longlat +datum=WGS84")))
nut.sim[,1] <- NULL
nut.sim <- plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(nut.sim, [c](https://rdrr.io/r/base/c.html)("x"="longitude_decimal_degrees", "y"="latitude_decimal_degrees"))
nr = [nrow](https://rdrr.io/r/base/nrow.html)(nut.sim)
nut.sim$site_key <- [paste](https://rdrr.io/r/base/paste.html)("Simulated", 1:nr, sep="_")
## insert zeros for all nutrients except for the ones we are not sure about:
## http://www.decodedscience.org/chemistry-sahara-sand-elements-dunes/45828
sim.vars = [c](https://rdrr.io/r/base/c.html)("oc", "oc_d", "c_tot", "n_tot", "ecec", "clay_tot_psa", "mg_ext", "k_ext")
nut.sim[,sim.vars] <- 0
nut.sim$silt_tot_psa = 2
nut.sim$sand_tot_psa = 98
nut.sim$hzn_top = 0
nut.sim$hzn_bot = 30
nut.sim$db_od = 1.55
nut.sim2 = nut.sim
nut.sim2$silt_tot_psa = 1
nut.sim2$sand_tot_psa = 99
nut.sim2$hzn_top = 30
nut.sim2$hzn_bot = 60
nut.sim2$db_od = 1.6
nut.simA = [rbind](https://rdrr.io/r/base/cbind.html)(nut.sim, nut.sim2)
#str(nut.simA)
nut.simA$source_db = "Simulated"
nut.simA$confidence_degree = 10
x.na = col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(nut.simA))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ nut.simA[,i] = NA } }
chemsprops.SIM = nut.simA[,col.names]
chemsprops.SIM$project_url = "https://gitlab.com/openlandmap/"
chemsprops.SIM$citation_url = "https://gitlab.com/openlandmap/compiled-ess-point-data-sets/"
}
[dim](https://rdrr.io/r/base/dim.html)(chemsprops.SIM)
#> [1] 718 36
```
Other potential large soil profile DBs of interest:
* Shangguan, W., Dai, Y., Liu, B., Zhu, A., Duan, Q., Wu, L., … \& Chen, D. (2013\). [A China data set of soil properties for land surface modeling](https://doi.org/10.1002/jame.20026). Journal of Advances in Modeling Earth Systems, 5(2\), 212\-224\.
* Salković, E., Djurović, I., Knežević, M., Popović\-Bugarin, V., \& Topalović, A. (2018\). Digitization and mapping of national legacy soil data of Montenegro. Soil and Water Research, 13(2\), 83\-89\. [https://doi.org/10\.17221/81/2017\-SWR](https://doi.org/10.17221/81/2017-SWR)
5\.4 Bind all datasets
-------------------------------
#### 5\.4\.0\.1 Bind and clean\-up
```
[ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*"))
#> [1] "chemsprops.AfSIS1" "chemsprops.AfSPDB"
#> [3] "chemsprops.Alaska" "chemsprops.bpht"
#> [5] "chemsprops.BZE_LW" "chemsprops.CAPERM"
#> [7] "chemsprops.CHLSOC" "chemsprops.CNSOT"
#> [9] "chemsprops.CostaRica" "chemsprops.CUFS"
#> [11] "chemsprops.EGRPR" "chemsprops.FEBR"
#> [13] "chemsprops.FIADB" "chemsprops.FRED"
#> [15] "chemsprops.GEMAS" "chemsprops.GROOT"
#> [17] "chemsprops.IRANSPDB" "chemsprops.ISCND"
#> [19] "chemsprops.LandPKS" "chemsprops.LUCAS"
#> [21] "chemsprops.LUCAS2" "chemsprops.Mangroves"
#> [23] "chemsprops.NAMSOTER" "chemsprops.NatSoil"
#> [25] "chemsprops.NCSCD" "chemsprops.NCSS"
#> [27] "chemsprops.NPDB" "chemsprops.Peatlands"
#> [29] "chemsprops.PRONASOLOS" "chemsprops.QuebecTEX"
#> [31] "chemsprops.RaCA" "chemsprops.RemnantSOC"
#> [33] "chemsprops.ScotlandNSIS1" "chemsprops.SIM"
#> [35] "chemsprops.SISLAC" "chemsprops.SOCPDB"
#> [37] "chemsprops.SoDaH" "chemsprops.SoilHealthDB"
#> [39] "chemsprops.SRDB" "chemsprops.USGS.NGS"
#> [41] "chemsprops.Vlaanderen" "chemsprops.WISE"
tot_sprops = dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*")), function(i){ mutate_all([setNames](https://rdrr.io/r/stats/setNames.html)([get](https://rdrr.io/r/base/get.html)(i), col.names), as.character) }))
## convert to numeric:
for(j in [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "layer_sequence",
"hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa",
"oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum",
"cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext",
"ec_satp", "ec_12pre")){
tot_sprops[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
```
Clean up typos and physically impossible values:
```
tex.rm = [rowSums](https://rdrr.io/r/base/colSums.html)(tot_sprops[,[c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa")])
[summary](https://rdrr.io/r/base/summary.html)(tex.rm)
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> -2997.00 100.00 100.00 99.39 100.00 500.00 274336
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tex.rm<99|[is.na](https://rdrr.io/r/base/NA.html)(tex.rm)|tex.rm>101, NA, tot_sprops[,j])
}
for(j in [c](https://rdrr.io/r/base/c.html)("ph_h2o","ph_kcl","ph_cacl2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in [c](https://rdrr.io/r/base/c.html)("db_od")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in [c](https://rdrr.io/r/base/c.html)("oc")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>800|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
```
Fill in the missing depths:
```
## soil layer depth (middle)
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
[summary](https://rdrr.io/r/base/summary.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$hzn_depth))
#> Mode FALSE TRUE
#> logical 766689 5465
## Note: large number of horizons without a depth
tot_sprops = tot_sprops[,]
#quantile(tot_sprops$hzn_depth, c(0.01,0.99), na.rm=TRUE)
tot_sprops$hzn_depth = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth<0, 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth>800, 800, tot_sprops$hzn_depth))
#hist(tot_sprops$hzn_depth, breaks=45)
```
Summary of the number of points per data source:
```
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$source_db))
#> AfSIS1 AfSPDB Alaska_interior BZE_LW
#> 4162 60277 3880 17187
#> Canada_CUFS Canada_NPDB Canada_subarctic Chilean_SOCDB
#> 15162 14900 1180 16358
#> China_SOTER CIFOR CostaRica Croatian_Soil_Pedon
#> 5105 561 2029 5746
#> CSIRO_NatSoil FEBR FIADB FRED
#> 70688 7804 23208 625
#> GEMAS_2009 GROOT Iran_SPDB ISCND
#> 4131 718 4677 3977
#> ISRIC_WISE LandPKS LUCAS_2009 LUCAS_2015
#> 23278 41644 21272 21859
#> MangrovesDB NAMSOTER NCSCD PRONASOLOS
#> 7733 2941 7082 31655
#> QuebecTEX RaCA2016 Russia_EGRPR ScotlandNSIS1
#> 26648 53663 4437 2977
#> Simulated SISLAC SOCPDB SoDaH
#> 718 49416 1526 17766
#> SoilHealthDB SRDB USDA_NCSS USGS.NGS
#> 120 1596 135671 9398
#> Vlaanderen WHRC_remnant_SOC
#> 41310 1604
```
Add a unique row identifier
```
tot_sprops$uuid = openssl::[md5](https://rdrr.io/pkg/openssl/man/hash.html)([make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)("OpenLandMap", tot_sprops$site_key, tot_sprops$layer_sequence, sep="_")))
```
and a unique location based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::[encode_olc](https://rdrr.io/pkg/olctools/man/encode_olc.html)(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
[length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$olc_id)))
#> [1] 205687
## 205,620
```
```
tot_sprops.pnts = tot_sprops[]
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "EPSG:4326"
```
Remove points falling in the sea or similar:
```
if({
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::[extract](https://rdrr.io/pkg/terra/man/extract.html)(mask, terra::[vect](https://rdrr.io/pkg/terra/man/vect.html)(tot_sprops.pnts)) ## TAKES 2 mins
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ov.sprops[,2]))
if([sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]))>0 | [sum](https://rdrr.io/r/base/sum.html)(ov.sprops[,2]==2)>0){
rem.lst = [which](https://rdrr.io/r/base/which.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
}
}
## final number of unique spatial locations:
[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts)
#> [1] 205687
## 203,107
```
#### 5\.4\.0\.2 Histogram plots
Keep in mind that some datasets only represent topsoil (e.g. LUCAS) while others cover the whole soil depth; higher mean values for some regions (Europe) should therefore be considered within the context of diverse soil depths.
```
[library](https://rdrr.io/r/base/library.html)([ggplot2](http://ggplot2.tidyverse.org))
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 195328 rows containing non-finite values (stat_boxplot).
```
```
sel.db = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "Canada_NPDB", "FIADB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "hzn_depth", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="SOC wprm", xlab="depth in cm")
```
Note: FIADB includes litter, and the soil samples are taken at fixed depths. The Canada dataset also shows SOC content for peatlands.
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=db_od)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 537198 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "oc", y = "db_od", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, ylab="Bulk density", xlab="SOC wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc_d))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(oc_d): NaNs produced
#> Warning in log1p(oc_d): NaNs produced
#> Warning: Removed 561232 rows containing non-finite values (stat_boxplot).
```
```
sel.db0 = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "oc_d", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="SOC kg/m3")
```
Note: SOC (%) and SOC density (kg/m3) show an almost linear relationship (curving toward a fixed value), except for organic soils (especially litter), where the relationship is slightly shifted.
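For reference, SOC density in the import steps above is derived as `oc_d = oc * db_od * (100 - crf)/100`, with `oc` in g/kg, bulk density `db_od` in t/m3 and coarse fragments `crf` in vol %, which gives kg/m3; a minimal worked example with made-up values:
```
## SOC density (kg/m3) from SOC concentration (g/kg), bulk density (t/m3)
## and coarse fragment content (vol %); example values only
oc <- 15; db_od <- 1.3; crf <- 10
signif(oc * db_od * (100 - crf)/100, 3)
#> [1] 17.6
```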
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "n_tot", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="N total wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=ph_h2o)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 270562 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB"),], x = "ph_kcl", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", xlab="soil pH KCl", ylab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "hzn_depth", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="soil pH H2O", xlab="depth in cm")
```
Note: there appears to be no correlation between soil pH and soil depth.
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB", "PRONASOLOS"),], x = "ph_h2o", y = "cec_sum", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="CEC", xlab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "ph_h2o", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="soil pH H2O")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=clay_tot_psa)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 279635 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "clay_tot_psa", x = "sand_tot_psa", method = "hexbin",
col = "increment", type = "source_db", ylab="clay %", xlab="sand %")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(cec_sum))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(cec_sum): NaNs produced
#> Warning in log1p(cec_sum): NaNs produced
#> Warning: Removed 514240 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(n_tot))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(n_tot): NaNs produced
#> Warning in log1p(n_tot): NaNs produced
#> Warning: Removed 446028 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(k_ext))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(k_ext): NaNs produced
#> Warning in log1p(k_ext): NaNs produced
#> Warning: Removed 523010 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(ec_satp))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 608509 rows containing non-finite values (stat_boxplot).
```
```
sprops_yrs = tot_sprops[]
sprops_yrs$year = [as.numeric](https://rdrr.io/r/base/numeric.html)([substr](https://rdrr.io/r/base/substr.html)(x=sprops_yrs$site_obsdate, 1, 4))
#> Warning: NAs introduced by coercion
sprops_yrs$year = [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year <1960, NA, [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year>2024, NA, sprops_yrs$year))
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(sprops_yrs, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=year)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 70644 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(sprops_yrs[sprops_yrs$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "oc", x = "year", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="Sampling year")
```
#### 5\.4\.0\.3 Convert to wide format
Add `layer_sequence` where missing, since it is needed to convert to the wide format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = [paste](https://rdrr.io/r/base/paste.html)(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if({
[library](https://rdrr.io/r/base/library.html)([dplyr](https://dplyr.tidyverse.org))
## Note: takes >2 mins
l.s1 <- tot_sprops[,[c](https://rdrr.io/r/base/c.html)("dsiteid","hzn_depth")] [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [group_by](https://dplyr.tidyverse.org/reference/group_by.html)(dsiteid) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(layer_sequence.f = data.table::[frank](https://Rdatatable.gitlab.io/data.table/reference/frank.html)(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
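## horizons are ranked by depth within each profile; the layer sequence is capped at 6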
}
```
Convert the long table to the [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column (note: this is usually the most computationally intensive step):
```
if({
[library](https://rdrr.io/r/base/library.html)([data.table](http://r-datatable.com))
tot_sprops.w = data.table::[dcast](https://Rdatatable.gitlab.io/data.table/reference/dcast.data.table.html)( [as.data.table](https://Rdatatable.gitlab.io/data.table/reference/as.data.table.html)(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = [c](https://rdrr.io/r/base/c.html)("uuid", hor.names[-[which](https://rdrr.io/r/base/which.html)(hor.names [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence"))]),
## "labsampnum", "hzn_desgn", "tex_psda"
#fun=function(x){ mean(x, na.rm=TRUE) },
## Note: does not work for characters
fun=function(x){ x[1] },
verbose = FALSE)
## remove "0" layers added automatically but containing no values
tot_sprops.w = tot_sprops.w[,[grep](https://rdrr.io/r/base/grep.html)("*_0$", [colnames](https://rdrr.io/r/base/colnames.html)(tot_sprops.w)):=NULL]
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$source_db=="LUCAS_2009" | tot_sprops_w.pnts$source_db=="LUCAS_2015" | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
out.gpkg = "./out/gpkg/sol_chem.pnts_horizons.gpkg"
#unlink(out.gpkg)
if({
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_chem.pnts_horizons.gpkg", "sol_chem.pnts_horizons", driver="GPKG")
}
```
#### 5\.4\.0\.4 Save RDS files
Remove points that are not allowed to be distributed publicly:
```
sel.rm <- tot_sprops$source_db=="LUCAS_2009" | tot_sprops$source_db=="LUCAS_2015" | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
tot_sprops.s = tot_sprops[!sel.rm,]
```
Plot in the Goode Homolosine projection and save the final objects:
```
if({
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1], crs=4326)
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_chem.pnts_sites.pdf")
## extremely slow --- takes 15mins
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_chem.pnts_sites.pdf ./img/sol_chem.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_chem.pnts_sites.png ./img/sol_chem.pnts_sites.png")
}
```
Fig. 1: Soil profiles and soil samples with chemical and physical properties, global compilation.
#### 5\.4\.0\.1 Bind and clean\-up
```
[ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*"))
#> [1] "chemsprops.AfSIS1" "chemsprops.AfSPDB"
#> [3] "chemsprops.Alaska" "chemsprops.bpht"
#> [5] "chemsprops.BZE_LW" "chemsprops.CAPERM"
#> [7] "chemsprops.CHLSOC" "chemsprops.CNSOT"
#> [9] "chemsprops.CostaRica" "chemsprops.CUFS"
#> [11] "chemsprops.EGRPR" "chemsprops.FEBR"
#> [13] "chemsprops.FIADB" "chemsprops.FRED"
#> [15] "chemsprops.GEMAS" "chemsprops.GROOT"
#> [17] "chemsprops.IRANSPDB" "chemsprops.ISCND"
#> [19] "chemsprops.LandPKS" "chemsprops.LUCAS"
#> [21] "chemsprops.LUCAS2" "chemsprops.Mangroves"
#> [23] "chemsprops.NAMSOTER" "chemsprops.NatSoil"
#> [25] "chemsprops.NCSCD" "chemsprops.NCSS"
#> [27] "chemsprops.NPDB" "chemsprops.Peatlands"
#> [29] "chemsprops.PRONASOLOS" "chemsprops.QuebecTEX"
#> [31] "chemsprops.RaCA" "chemsprops.RemnantSOC"
#> [33] "chemsprops.ScotlandNSIS1" "chemsprops.SIM"
#> [35] "chemsprops.SISLAC" "chemsprops.SOCPDB"
#> [37] "chemsprops.SoDaH" "chemsprops.SoilHealthDB"
#> [39] "chemsprops.SRDB" "chemsprops.USGS.NGS"
#> [41] "chemsprops.Vlaanderen" "chemsprops.WISE"
tot_sprops = dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("chemsprops.*")), function(i){ mutate_all([setNames](https://rdrr.io/r/stats/setNames.html)([get](https://rdrr.io/r/base/get.html)(i), col.names), as.character) }))
## convert to numeric:
for(j in [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "layer_sequence",
"hzn_top", "hzn_bot", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa",
"oc", "oc_d", "c_tot", "n_tot", "ph_kcl", "ph_h2o", "ph_cacl2", "cec_sum",
"cec_nh4", "ecec", "wpg2", "db_od", "ca_ext", "mg_ext", "na_ext", "k_ext",
"ec_satp", "ec_12pre")){
tot_sprops[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
```
Clean up typos and physically impossible values:
```
tex.rm = [rowSums](https://rdrr.io/r/base/colSums.html)(tot_sprops[,[c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa")])
[summary](https://rdrr.io/r/base/summary.html)(tex.rm)
#> Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
#> -2997.00 100.00 100.00 99.39 100.00 500.00 274336
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tex.rm<99|[is.na](https://rdrr.io/r/base/NA.html)(tex.rm)|tex.rm>101, NA, tot_sprops[,j])
}
for(j in [c](https://rdrr.io/r/base/c.html)("ph_h2o","ph_kcl","ph_cacl2")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in [c](https://rdrr.io/r/base/c.html)("db_od")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in [c](https://rdrr.io/r/base/c.html)("oc")){
tot_sprops[,j] = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops[,j]>800|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
```
Fill\-in the missing depths:
```
## soil layer depth (middle)
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
[summary](https://rdrr.io/r/base/summary.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$hzn_depth))
#> Mode FALSE TRUE
#> logical 766689 5465
## Note: large number of horizons without a depth
tot_sprops = tot_sprops[,]
#quantile(tot_sprops$hzn_depth, c(0.01,0.99), na.rm=TRUE)
tot_sprops$hzn_depth = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth<0, 10, [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$hzn_depth>800, 800, tot_sprops$hzn_depth))
#hist(tot_sprops$hzn_depth, breaks=45)
```
Summary number of points per data source:
```
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$source_db))
#> AfSIS1 AfSPDB Alaska_interior BZE_LW
#> 4162 60277 3880 17187
#> Canada_CUFS Canada_NPDB Canada_subarctic Chilean_SOCDB
#> 15162 14900 1180 16358
#> China_SOTER CIFOR CostaRica Croatian_Soil_Pedon
#> 5105 561 2029 5746
#> CSIRO_NatSoil FEBR FIADB FRED
#> 70688 7804 23208 625
#> GEMAS_2009 GROOT Iran_SPDB ISCND
#> 4131 718 4677 3977
#> ISRIC_WISE LandPKS LUCAS_2009 LUCAS_2015
#> 23278 41644 21272 21859
#> MangrovesDB NAMSOTER NCSCD PRONASOLOS
#> 7733 2941 7082 31655
#> QuebecTEX RaCA2016 Russia_EGRPR ScotlandNSIS1
#> 26648 53663 4437 2977
#> Simulated SISLAC SOCPDB SoDaH
#> 718 49416 1526 17766
#> SoilHealthDB SRDB USDA_NCSS USGS.NGS
#> 120 1596 135671 9398
#> Vlaanderen WHRC_remnant_SOC
#> 41310 1604
```
Add unique row identifier
```
tot_sprops$uuid = openssl::[md5](https://rdrr.io/pkg/openssl/man/hash.html)([make.unique](https://rdrr.io/r/base/make.unique.html)([paste](https://rdrr.io/r/base/paste.html)("OpenLandMap", tot_sprops$site_key, tot_sprops$layer_sequence, sep="_")))
```
and unique location based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::[encode_olc](https://rdrr.io/pkg/olctools/man/encode_olc.html)(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
[length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$olc_id)))
#> [1] 205687
## 205,620
```
```
tot_sprops.pnts = tot_sprops[]
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "EPSG:4326"
```
Remove points falling in the sea or similar:
```
if({
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::[extract](https://rdrr.io/pkg/terra/man/extract.html)(mask, terra::[vect](https://rdrr.io/pkg/terra/man/vect.html)(tot_sprops.pnts)) ## TAKES 2 mins
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ov.sprops[,2]))
if([sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]))>0 | [sum](https://rdrr.io/r/base/sum.html)(ov.sprops[,2]==2)>0){
rem.lst = [which](https://rdrr.io/r/base/which.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
}
}
## final number of unique spatial locations:
[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts)
#> [1] 205687
## 203,107
```
#### 5\.4\.0\.2 Histogram plots
Keep in mind that some datasets only represent top\-soil (e.g. LUCAS) while others
cover the whole soil depth; hence, higher mean values for some regions (Europe) should
be considered within the context of diverse soil depths.
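To put the boxplots below in context, a quick summary of sampling depths per data source can be derived from the `hzn_depth` column prepared above; this is only an illustrative sketch, not part of the original processing:
```
## Illustrative sketch only: median sampling depth per data source
## (assumes `tot_sprops` with `source_db` and `hzn_depth` is still in memory)
depth.by.db = aggregate(hzn_depth ~ source_db, data = tot_sprops, FUN = median)
depth.by.db[order(depth.by.db$hzn_depth),]
```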
```
[library](https://rdrr.io/r/base/library.html)([ggplot2](http://ggplot2.tidyverse.org))
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 195328 rows containing non-finite values (stat_boxplot).
```
```
sel.db = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "Canada_NPDB", "FIADB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "hzn_depth", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="SOC wprm", xlab="depth in cm")
```
Note: FIADB includes litter, and its soil samples are taken at fixed depths. The Canada
dataset also shows SOC content for peatlands.
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=db_od)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 537198 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db,], x = "oc", y = "db_od", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, ylab="Bulk density", xlab="SOC wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(oc_d))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(oc_d): NaNs produced
#> Warning in log1p(oc_d): NaNs produced
#> Warning: Removed 561232 rows containing non-finite values (stat_boxplot).
```
```
sel.db0 = [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "Canada_CUFS", "USDA_NCSS", "AfSPDB", "PRONASOLOS", "CSIRO_NatSoil")
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "oc_d", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="SOC kg/m3")
```
Note: SOC (%) and SOC density (kg/m3\) show an almost linear relationship (curving toward a fixed value),
except for organic soils (especially litter), where the relationship is slightly shifted.
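For orientation, SOC density can be approximated from SOC content, bulk density and coarse fragments. The sketch below uses a commonly applied approximation and the `oc` (%), `db_od` (g/cm3) and `wpg2` (%) columns prepared above; it is not necessarily how `oc_d` was originally derived:
```
## Illustrative sketch only (common approximation, not necessarily how oc_d was populated):
## SOC density [kg/m3] ~= SOC [%] / 100 * bulk density [kg/m3] * (1 - coarse fragments [%] / 100)
oc_d.approx = tot_sprops$oc / 100 * tot_sprops$db_od * 1000 * (1 - tot_sprops$wpg2 / 100)
#summary(oc_d.approx - tot_sprops$oc_d)
```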
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "oc", y = "n_tot", method = "hexbin",
col = "increment", type = "source_db", log.x=TRUE, log.y=TRUE, xlab="SOC wprm", ylab="N total wprm")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=ph_h2o)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 270562 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB"),], x = "ph_kcl", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", xlab="soil pH KCl", ylab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "hzn_depth", y = "ph_h2o", method = "hexbin",
col = "increment", type = "source_db", log.x = TRUE, log.y=TRUE, ylab="soil pH H2O", xlab="depth in cm")
```
Note: there seems to be no apparent correlation between soil pH and soil depth.
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("ISRIC_WISE", "USDA_NCSS", "AfSPDB", "PRONASOLOS"),], x = "ph_h2o", y = "cec_sum", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="CEC", xlab="soil pH H2O")
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], x = "ph_h2o", y = "oc", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="soil pH H2O")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=clay_tot_psa)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 279635 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(tot_sprops[tot_sprops$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "clay_tot_psa", x = "sand_tot_psa", method = "hexbin",
col = "increment", type = "source_db", ylab="clay %", xlab="sand %")
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(cec_sum))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(cec_sum): NaNs produced
#> Warning in log1p(cec_sum): NaNs produced
#> Warning: Removed 514240 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(n_tot))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(n_tot): NaNs produced
#> Warning in log1p(n_tot): NaNs produced
#> Warning: Removed 446028 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(k_ext))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning in log1p(k_ext): NaNs produced
#> Warning in log1p(k_ext): NaNs produced
#> Warning: Removed 523010 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(ec_satp))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 608509 rows containing non-finite values (stat_boxplot).
```
```
sprops_yrs = tot_sprops[]
sprops_yrs$year = [as.numeric](https://rdrr.io/r/base/numeric.html)([substr](https://rdrr.io/r/base/substr.html)(x=sprops_yrs$site_obsdate, 1, 4))
#> Warning: NAs introduced by coercion
sprops_yrs$year = [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year <1960, NA, [ifelse](https://rdrr.io/r/base/ifelse.html)(sprops_yrs$year>2024, NA, sprops_yrs$year))
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(sprops_yrs, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=year)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 70644 rows containing non-finite values (stat_boxplot).
```
```
openair::[scatterPlot](https://rdrr.io/pkg/openair/man/scatterPlot.html)(sprops_yrs[sprops_yrs$source_db [%in%](https://rdrr.io/r/base/match.html) sel.db0,], y = "oc", x = "year", method = "hexbin",
col = "increment", type = "source_db", log.y=TRUE, ylab="SOC wprm", xlab="Sampling year")
```
#### 5\.4\.0\.3 Convert to wide format
Add `layer_sequence` values where missing, since they are needed to convert to wide format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = [paste](https://rdrr.io/r/base/paste.html)(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if({
[library](https://rdrr.io/r/base/library.html)([dplyr](https://dplyr.tidyverse.org))
## Note: takes >2 mins
l.s1 <- tot_sprops[,[c](https://rdrr.io/r/base/c.html)("dsiteid","hzn_depth")] [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [group_by](https://dplyr.tidyverse.org/reference/group_by.html)(dsiteid) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(layer_sequence.f = data.table::[frank](https://Rdatatable.gitlab.io/data.table/reference/frank.html)(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = [ifelse](https://rdrr.io/r/base/ifelse.html)(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
}
```
Convert the long table to [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column (note: this is usually the most computationally intensive / time\-consuming step):
```
if({
[library](https://rdrr.io/r/base/library.html)([data.table](http://r-datatable.com))
tot_sprops.w = data.table::[dcast](https://Rdatatable.gitlab.io/data.table/reference/dcast.data.table.html)( [as.data.table](https://Rdatatable.gitlab.io/data.table/reference/as.data.table.html)(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = [c](https://rdrr.io/r/base/c.html)("uuid", hor.names[-[which](https://rdrr.io/r/base/which.html)(hor.names [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("site_key", "layer_sequence"))]),
## "labsampnum", "hzn_desgn", "tex_psda"
#fun=function(x){ mean(x, na.rm=TRUE) },
## Note: does not work for characters
fun=function(x){ x[1] },
verbose = FALSE)
## remove "0" layers added automatically but containing no values
tot_sprops.w = tot_sprops.w[,[grep](https://rdrr.io/r/base/grep.html)("*_0$", [colnames](https://rdrr.io/r/base/colnames.html)(tot_sprops.w)):=NULL]
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$source_db=="LUCAS_2009" | tot_sprops_w.pnts$source_db=="LUCAS_2015" | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
out.gpkg = "./out/gpkg/sol_chem.pnts_horizons.gpkg"
#unlink(out.gpkg)
if({
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_chem.pnts_horizons.gpkg", "sol_chem.pnts_horizons", drive="GPKG")
}
```
#### 5\.4\.0\.4 Save RDS files
Remove points that are not allowed to be distributed publicly:
```
sel.rm <- tot_sprops$source_db=="LUCAS_2009" | tot_sprops$source_db=="LUCAS_2015" | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) mng.rm | tot_sprops$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
tot_sprops.s = tot_sprops[!sel.rm,]
```
Plot in the Goode Homolosine projection and save final objects:
```
if({
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1], crs=4326)
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_chem.pnts_sites.pdf")
## extremely slow --- takes 15mins
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_chem.pnts_sites.pdf ./img/sol_chem.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_chem.pnts_sites.png ./img/sol_chem.pnts_sites.png")
}
```
Fig. 1: Soil profiles and soil samples with chemical and physical properties, global compilation.
5\.5 Save final analysis\-ready objects:
----------------------------------------
```
saveRDS.gz(tot_sprops.s, "./out/rds/sol_chem.pnts_horizons.rds")
saveRDS.gz(tot_sprops, "/mnt/diskstation/data/Soil_points/sol_chem.pnts_horizons.rds")
saveRDS.gz(tot_sprops.pnts, "/mnt/diskstation/data/Soil_points/sol_chem.pnts_sites.rds")
#library(farff)
#writeARFF(tot_sprops.s, "./out/arff/sol_chem.pnts_horizons.arff", overwrite = TRUE)
## compressed CSV
[write.csv](https://rdrr.io/r/utils/write.table.html)(tot_sprops.s, file=[gzfile](https://rdrr.io/r/base/connections.html)("./out/csv/sol_chem.pnts_horizons.csv.gz"))
## regression matrix:
#saveRDS.gz(rm.sol, "./out/rds/sol_chem.pnts_horizons_rm.rds")
```
Save temp object:
```
save.image.pigz(file="soilchem.RData")
## rmarkdown::render("Index.rmd")
```
6 Soil physical and hydrological properties
===========================================
You are reading the work\-in\-progress An Open Compendium of Soil Sample and Soil Profile Datasets. This chapter is currently a draft version; a peer\-review publication is pending. You can find the polished first edition at <https://opengeohub.github.io/SoilSamples/>.
Last update: 2023\-05\-10
6\.1 Overview
-------------
This section describes the import steps used to produce a global compilation of soil
laboratory data with physical and hydraulic soil properties that can then be
used for predictive soil mapping / modeling at global and regional scales.
Read more about computing with soil hydraulic / physical properties in R:
* Gupta, S., Hengl, T., Lehmann, P., Bonetti, S., and Or, D. [**SoilKsatDB: global soil saturated hydraulic conductivity measurements for geoscience applications**](https://doi.org/10.5194/essd-2020-149). Earth Syst. Sci. Data Discuss., [https://doi.org/10\.5194/essd\-2020\-149](https://doi.org/10.5194/essd-2020-149), in review, 2021\.
* de Sousa, D. F., Rodrigues, S., de Lima, H. V., \& Chagas, L. T. (2020\). [R software packages as a tool for evaluating soil physical and hydraulic properties](https://doi.org/10.1016/j.compag.2019.105077). Computers and Electronics in Agriculture, 168, 105077\.
6\.2 Specifications
----------------------------
#### 6\.2\.0\.1 Data standards
* Metadata information: [“Soil Survey Investigation Report No. 42\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) and [“Soil Survey Investigation Report No. 45\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052226.pdf)
* Model DB: [National Cooperative Soil Survey (NCSS) Soil Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/)
#### 6\.2\.0\.2 *Target variables:*
```
site.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max")
hor.names = [c](https://rdrr.io/r/base/c.html)("labsampnum","site_key","layer_sequence","hzn_top","hzn_bot","hzn_desgn","db_13b", "db_od", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
## target structure:
col.names = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "db_13b", "db_od", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field", "source_db", "confidence_degree", "project_url", "citation_url")
```
* `db_13b`: Bulk density (33kPa) in g/cm3 for \<2mm soil fraction,
* `db`: Bulk density (unknown method) in g/cm3 for \<2mm soil fraction,
* `COLEws`: Coefficient of Linear Extensibility (COLE) whole soil in ratio for \<2mm soil fraction,
* `w6clod`: Water Content 6 kPa \<2mm in % wt for \<2mm soil fraction,
* `w10cld`: Water Content 10 kPa \<2mm in % wt for \<2mm soil fraction,
* `w3cld`: Water Content 33 kPa \<2mm in % vol for \<2mm soil fraction (Field Capacity),
* `w15l2`: Water Content 1500 kPa \<2mm in % vol for \<2mm soil fraction (Permanent Wilting Point),
* `w15bfm`: Water Content 1500 kPa moist \<2mm in % wt for \<2mm soil fraction,
* `adod`: Air\-Dry/Oven\-Dry in ratio for \<2mm soil fraction,
* `wrd_ws13`: Water Retention Difference whole soil, 1500\-kPa suction and an upper limit of usually 33\-kPa in cm3 / cm\-3 for \<2mm soil fraction,
* `cec7_cly`: CEC\-7/Clay ratio in ratio for \<2mm soil fraction,
* `w15cly`: CEC/Clay ratio at 1500 kPa in ratio for \<2mm soil fraction,
* `tex_psda`: Texture Determined, PSDA in factor for \<2mm soil fraction,
* `clay_tot_psa`: Total Clay, \<0\.002 mm (\<2 µm) in % wt for \<2mm soil fraction,
* `silt_tot_psa`: Total Silt, 0\.002\-0\.05 mm in % wt for \<2mm soil fraction,
* `sand_tot_psa`: Total Sand, 0\.05\-2\.0 mm in % wt for \<2mm soil fraction,
* `wpg2`: Coarse fragments \>2\-mm weight fraction in % wt for \<2mm soil fraction,
* `hzn_top`: The top (upper) depth of the layer in cm,
* `hzn_bot`: The bottom (lower) depth of the layer in cm,
* `oc_v`: Organic carbon (unknown method) in % wt for \<2mm soil fraction,
* `ph_kcl`: pH, 1N KCl in ratio for \<2mm soil fraction,
* `ph_h2o_v`: pH in water (unknown method) for \<2mm soil fraction,
* `cec_sum`: Sum of Cations (CEC\-8\.2\) in cmol(\+)/kg for \<2mm soil fraction,
* `cec_nh4`: NH4OAc, pH 7 (CEC\-7\) in cmol(\+)/kg for \<2mm soil fraction,
* `ksat_field`: Field\-estimated Saturated Hydraulic Conductivity in cm/day for \<2mm soil fraction,
* `ksat_lab`: Laboratory\-estimated Saturated Hydraulic Conductivity in cm/day for \<2mm soil fraction,
Some variable names have been adjusted (e.g. `ph_h2o` to `ph_h2o_na`) to include `unknown method` so that variables from different laboratory methods can be seamlessly merged (see the sketch below).
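A minimal sketch of such a renaming step (the helper below is hypothetical and only illustrates the naming convention):
```
## Hypothetical helper, shown only to illustrate the naming convention:
## append a method suffix so that columns from different laboratory methods can coexist
rename_by_method = function(df, from = "ph_h2o", suffix = "na") {
  names(df)[names(df) == from] = paste(from, suffix, sep = "_")
  df
}
#df = rename_by_method(df, from = "ph_h2o", suffix = "na")
```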
Conversion between VWC (volumetric water content) and MWC (mass water content) is based on the formula (Landon, 1991; Benham, 1998; van Reeuwijk, 1993):
* VWC (%v/v) \= MWC (% by weight) \* bulk density (g/cm3, i.e. relative to the density of water of 1 g/cm3)
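A minimal sketch of this conversion in R (assuming moisture content in % by weight and bulk density in g/cm3, with the density of water taken as 1 g/cm3):
```
## Minimal sketch (assumption: bulk density in g/cm3, density of water = 1 g/cm3)
mwc_to_vwc = function(mwc_pct_wt, bulk_density_gcm3) {
  mwc_pct_wt * bulk_density_gcm3
}
#mwc_to_vwc(25, 1.4) ## 35 % v/v
```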
6\.3 Data import
-------------------------
#### 6\.3\.0\.1 NCSS Characterization Database
* National Cooperative Soil Survey, (2020\). [National Cooperative Soil Survey Characterization Database](http://ncsslabdatamart.sc.egov.usda.gov/). <http://ncsslabdatamart.sc.egov.usda.gov/>
```
if({
ncss.site <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Site_Location.csv", stringsAsFactors = FALSE)
#str(ncss.site)
## Location accuracy unknown but we assume 100m
ncss.site$location_accuracy_max = NA
ncss.site$location_accuracy_min = 100
ncss.layer <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Layer.csv", stringsAsFactors = FALSE)
ncss.bdm <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Bulk_Density_and_Moisture.csv", stringsAsFactors = FALSE)
#summary(as.factor(ncss.bdm$prep_code))
ncss.bdm.0 <- ncss.bdm[ncss.bdm$prep_code=="S",]
#summary(ncss.bdm.0$db_od)
## 0 values --- error!
ncss.carb <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Carbon_and_Extractions.csv", stringsAsFactors = FALSE)
ncss.organic <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Organic.csv", stringsAsFactors = FALSE)
ncss.pH <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_pH_and_Carbonates.csv", stringsAsFactors = FALSE)
#str(ncss.pH)
#summary(!is.na(ncss.pH$ph_h2o))
ncss.PSDA <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_PSDA_and_Rock_Fragments.csv", stringsAsFactors = FALSE)
ncss.CEC <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_CEC_and_Bases.csv")
ncss.horizons <- plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([list](https://rdrr.io/r/base/list.html)(ncss.bdm.0, ncss.layer, ncss.carb, ncss.organic, ncss.pH, ncss.PSDA, ncss.CEC), type = "left", by="labsampnum")
#head(ncss.horizons)
[nrow](https://rdrr.io/r/base/nrow.html)(ncss.horizons)
ncss.horizons$ksat_lab = NA; ncss.horizons$ksat_field = NA
hydrosprops.NCSS = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(ncss.site[,site.names], ncss.horizons[,hor.names], by="site_key")
## soil organic carbon:
#summary(!is.na(hydrosprops.NCSS$oc))
#summary(!is.na(hydrosprops.NCSS$ph_h2o))
#summary(!is.na(hydrosprops.NCSS$ph_kcl))
hydrosprops.NCSS$source_db = "USDA_NCSS"
#str(hydrosprops.NCSS)
#hist(hydrosprops.NCSS$w3cld[hydrosprops.NCSS$w3cld<150], breaks=45, col="gray")
## ERROR: MANY VALUES >100%
## fills in missing BD values using formula from Köchy, Hiederer, and Freibauer (2015)
db.f = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(hydrosprops.NCSS$db_13b), -0.31*[log](https://rdrr.io/r/base/Log.html)(hydrosprops.NCSS$oc)+1.38, hydrosprops.NCSS$db_13b)
db.f[db.f<0.02 | db.f>2.87] = NA
## Convert to volumetric % to match most of world data sets:
hydrosprops.NCSS$w3cld = hydrosprops.NCSS$w3cld * db.f
hydrosprops.NCSS$w15l2 = hydrosprops.NCSS$w15l2 * db.f
hydrosprops.NCSS$w10cld = hydrosprops.NCSS$w10cld * db.f
#summary(as.factor(hydrosprops.NCSS$tex_psda))
## texture classes need to be cleaned up!
## check WRC values for sandy soils
#hydrosprops.NCSS[which(!is.na(hydrosprops.NCSS$w3cld) & hydrosprops.NCSS$sand_tot_psa>95)[1:10],]
## check WRC values for ORGANIC soils
#hydrosprops.NCSS[which(!is.na(hydrosprops.NCSS$w3cld) & hydrosprops.NCSS$oc>12)[1:10],]
## w3cld > 100?
hydrosprops.NCSS$confidence_degree = 1
hydrosprops.NCSS$project_url = "http://ncsslabdatamart.sc.egov.usda.gov/"
hydrosprops.NCSS$citation_url = "https://doi.org/10.2136/sssaj2016.11.0386n"
hydrosprops.NCSS = complete.vars(hydrosprops.NCSS)
saveRDS.gz(hydrosprops.NCSS, "/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/hydrosprops.NCSS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.NCSS)
#> [1] 113991 40
```
#### 6\.3\.0\.2 Africa soil profiles database
* Leenaars, J. G., Van Oostrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\.
```
if({
[require](https://rdrr.io/r/base/library.html)([foreign](https://svn.r-project.org/R-packages/trunk/foreign))
afspdb.profiles <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
## approximate location error
afspdb.profiles$location_accuracy_min = afspdb.profiles$XYAccur * 1e5
afspdb.profiles$location_accuracy_min = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(afspdb.profiles$location_accuracy_min < 20, NA, afspdb.profiles$location_accuracy_min)
afspdb.profiles$location_accuracy_max = NA
afspdb.layers <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
## select columns of interest:
afspdb.s.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "T_Year", "X_LonDD", "Y_LatDD", "location_accuracy_min", "location_accuracy_max")
## Convert to weight content
#summary(afspdb.layers$BlkDens)
## select layers
afspdb.h.lst <- [c](https://rdrr.io/r/base/c.html)("LayerID", "ProfileID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "db_13b", "BlkDens", "COLEws", "VMCpF18", "VMCpF20", "VMCpF25", "VMCpF42", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "PHKCl", "PHH2O", "CecSoil", "cec_nh4", "CfPc", "ksat_lab", "ksat_field")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid")){ afspdb.profiles[,j] = NA }
for(j in [c](https://rdrr.io/r/base/c.html)("db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ksat_lab", "ksat_field")){ afspdb.layers[,j] = NA }
hydrosprops.AfSPDB = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afspdb.profiles[,afspdb.s.lst], afspdb.layers[,afspdb.h.lst])
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(hydrosprops.AfSPDB)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(hydrosprops.AfSPDB[,j])) { hydrosprops.AfSPDB[,j] <- [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(hydrosprops.AfSPDB[,j] < -200, NA, hydrosprops.AfSPDB[,j]) }
}
hydrosprops.AfSPDB$source_db = "AfSPDB"
hydrosprops.AfSPDB$confidence_degree = 5
hydrosprops.AfSPDB$OrgC = hydrosprops.AfSPDB$OrgC/10
#summary(hydrosprops.AfSPDB$OrgC)
hydrosprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
hydrosprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
hydrosprops.AfSPDB = complete.vars(hydrosprops.AfSPDB, sel = [c](https://rdrr.io/r/base/c.html)("VMCpF25", "VMCpF42"), coords = [c](https://rdrr.io/r/base/c.html)("X_LonDD", "Y_LatDD"))
saveRDS.gz(hydrosprops.AfSPDB, "/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/hydrosprops.AfSPDB.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.AfSPDB)
#> [1] 10720 40
```
#### 6\.3\.0\.3 ISRIC ISIS
* Batjes, N. H. (1995\). [A homogenized soil data file for global environmental research: A subset of FAO, ISRIC and NRCS profiles (Version 1\.0\) (No. 95/10b)](https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf). ISRIC.
* Van de Ven, T., \& Tempel, P. (1994\). ISIS 4\.0: ISRIC Soil Information System: User Manual. ISRIC.
```
if({
isis.xy <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/Sites.csv", stringsAsFactors = FALSE)
#str(isis.xy)
isis.des <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/SitedescriptionResults.csv", stringsAsFactors = FALSE)
isis.site <- [data.frame](https://rdrr.io/r/base/data.frame.html)(site_key=isis.xy$Id, usiteid=[paste](https://rdrr.io/r/base/paste.html)(isis.xy$CountryISO, isis.xy$SiteNumber, sep=""))
id0.lst = [c](https://rdrr.io/r/base/c.html)(236,235,224)
nm0.lst = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "site_obsdate")
isis.site.l = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(id0.lst), function(i){plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)([subset](https://Rdatatable.gitlab.io/data.table/reference/subset.data.table.html)(isis.des, ValueId==id0.lst[i])[,[c](https://rdrr.io/r/base/c.html)("SampleId","Value")], replace=[c](https://rdrr.io/r/base/c.html)("SampleId"="site_key", "Value"=[paste](https://rdrr.io/r/base/paste.html)(nm0.lst[i])))}), type = "full")
isis.site.df = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.site, isis.site.l)
for(j in nm0.lst){ isis.site.df[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(isis.site.df[,j]) }
isis.site.df[isis.site.df$usiteid=="CI2","latitude_decimal_degrees"] = 5.883333
#str(isis.site.df)
isis.site.df$location_accuracy_min = 100
isis.site.df$location_accuracy_max = NA
isis.smp <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalSamples.csv", stringsAsFactors = FALSE)
isis.ana <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalResults.csv", stringsAsFactors = FALSE)
#str(isis.ana)
isis.class <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/ClassificationResults.csv", stringsAsFactors = FALSE)
isis.hor <- [data.frame](https://rdrr.io/r/base/data.frame.html)(labsampnum=isis.smp$Id, hzn_top=isis.smp$Top, hzn_bot=isis.smp$Bottom, site_key=isis.smp$SiteId)
isis.hor$hzn_bot <- [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(">", "", isis.hor$hzn_bot))
#str(isis.hor)
id.lst = [c](https://rdrr.io/r/base/c.html)(1,2,22,4,28,31,32,14,34,38,39,42)
nm.lst = [c](https://rdrr.io/r/base/c.html)("ph_h2o","ph_kcl","wpg2","oc","sand_tot_psa","silt_tot_psa","clay_tot_psa","cec_sum","db_od","w10cld","w3cld", "w15l2")
#str(as.numeric(isis.ana$Value[isis.ana$ValueId==38]))
isis.hor.l = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(id.lst), function(i){plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)([subset](https://Rdatatable.gitlab.io/data.table/reference/subset.data.table.html)(isis.ana, ValueId==id.lst[i])[,[c](https://rdrr.io/r/base/c.html)("SampleId","Value")], replace=[c](https://rdrr.io/r/base/c.html)("SampleId"="labsampnum", "Value"=[paste](https://rdrr.io/r/base/paste.html)(nm.lst[i])))}), type = "full")
#summary(as.numeric(isis.hor.l$w3cld))
isis.hor.df = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.hor, isis.hor.l)
isis.hor.df = isis.hor.df[,]
#summary(as.numeric(isis.hor.df$w3cld))
for(j in nm.lst){ isis.hor.df[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(isis.hor.df[,j]) }
#str(isis.hor.df)
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("layer_sequence", "hzn_desgn", "tex_psda", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "db_13b", "w6clod", "ksat_lab", "ksat_field")){ isis.hor.df[,j] = NA }
[which](https://rdrr.io/r/base/which.html)(!hor.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(isis.hor.df))
hydrosprops.ISIS <- [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.site.df[,site.names], isis.hor.df[,hor.names], type="left")
hydrosprops.ISIS$source_db = "ISRIC_ISIS"
hydrosprops.ISIS$confidence_degree = 1
hydrosprops.ISIS$project_url = "https://isis.isric.org"
hydrosprops.ISIS$citation_url = "https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf"
hydrosprops.ISIS = complete.vars(hydrosprops.ISIS)
saveRDS.gz(hydrosprops.ISIS, "/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/hydrosprops.ISIS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.ISIS)
#> [1] 1176 40
```
#### 6\.3\.0\.4 ISRIC WISE
* Batjes, N.H. (2019\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management 5:124–127\.
```
if({
wise.SITE <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", stringsAsFactors=FALSE)
#summary(as.factor(wise.SITE$LONLAT_ACC))
wise.SITE$location_accuracy_min = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="D", 1e5/2, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="S", 30, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="M", 1800/2, NA)))
wise.SITE$location_accuracy_max = NA
wise.HORIZON <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv")
wise.s.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_id", "SOURCE_ID", "DATEYR", "LONDD", "LATDD", "location_accuracy_min", "location_accuracy_max")
## Volumetric values
#summary(wise.HORIZON$BULKDENS)
#summary(wise.HORIZON$VMC1)
wise.HORIZON$WISE3_id = wise.HORIZON$WISE3_ID
wise.h.lst <- [c](https://rdrr.io/r/base/c.html)("labsampnum", "WISE3_id", "HONU", "TOPDEP", "BOTDEP", "DESIG", "db_13b", "BULKDENS", "COLEws", "w6clod", "VMC1", "VMC2", "VMC3", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "PHKCL", "PHH2O", "CECSOIL", "cec_nh4", "GRAVEL", "ksat_lab", "ksat_field")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ wise.HORIZON[,j] = NA }
hydrosprops.WISE = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(wise.SITE[,wise.s.lst], wise.HORIZON[,wise.h.lst])
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(hydrosprops.WISE)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(hydrosprops.WISE[,j])) { hydrosprops.WISE[,j] <- [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(hydrosprops.WISE[,j] < -200, NA, hydrosprops.WISE[,j]) }
}
hydrosprops.WISE$ORGC = hydrosprops.WISE$ORGC/10
hydrosprops.WISE$source_db = "ISRIC_WISE"
hydrosprops.WISE$project_url = "https://isric.org"
hydrosprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
hydrosprops.WISE <- complete.vars(hydrosprops.WISE, sel=[c](https://rdrr.io/r/base/c.html)("VMC2", "VMC3"), coords = [c](https://rdrr.io/r/base/c.html)("LONDD", "LATDD"))
hydrosprops.WISE$confidence_degree = 5
#summary(hydrosprops.WISE$VMC3)
saveRDS.gz(hydrosprops.WISE, "/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/hydrosprops.WISE.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.WISE)
#> [1] 1325 40
```
#### 6\.3\.0\.5 Fine Root Ecology Database (FRED)
* Iversen CM, Powell AS, McCormack ML, Blackwood CB, Freschet GT, Kattge J, Roumet C, Stover DB, Soudzilovskaia NA, Valverde\-Barrantes OJ, van Bodegom PM, Violle C. 2018\. Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 2\. Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.012/1417481](https://doi.org/10.25581/ornlsfa.012/1417481).
```
if({
fred = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", skip = 5, header=FALSE)
[names](https://rdrr.io/r/base/names.html)(fred) = [names](https://rdrr.io/r/base/names.html)([read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", nrows=1, header=TRUE))
fred.h.lst = [c](https://rdrr.io/r/base/c.html)("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "db_13b", "Soil.bulk.density", "COLEws", "w6clod", "w10cld", "Soil.water_Volumetric.content", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "ph_kcl", "Soil.pH_Water", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
#summary(fred$Soil.water_Volumetric.content)
#summary(fred$Soil.water_Storage.capacity)
fred$site_obsdate = [rowMeans](https://rdrr.io/r/base/colSums.html)(fred[,[c](https://rdrr.io/r/base/c.html)("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE)
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_bot = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Lower.sampling.depth)
fred$hzn_top = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Upper.sampling.depth)
x.na = fred.h.lst[[which](https://rdrr.io/r/base/which.html)(!fred.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fred))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
hydrosprops.FRED = fred[,fred.h.lst]
#plot(hydrosprops.FRED[,4:5])
hydrosprops.FRED$source_db = "FRED"
hydrosprops.FRED$confidence_degree = 5
hydrosprops.FRED$project_url = "https://roots.ornl.gov/"
hydrosprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.012/1417481"
hydrosprops.FRED = complete.vars(hydrosprops.FRED, sel = [c](https://rdrr.io/r/base/c.html)("Soil.water_Volumetric.content", "Soil.texture_Fraction.clay"))
saveRDS.gz(hydrosprops.FRED, "/mnt/diskstation/data/Soil_points/INT/FRED/hydrosprops.FRED.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.FRED)
#> [1] 3761 40
```
#### 6\.3\.0\.6 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/).
```
if({
russ.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = [paste](https://rdrr.io/r/base/paste.html)(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * russ.HOR$TEXTSIF
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * russ.HOR$TEXTSIF
## Correct texture fractions:
sumTex <- [rowSums](https://rdrr.io/r/base/colSums.html)(russ.HOR[,[c](https://rdrr.io/r/base/c.html)("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- russ.HOR$ORGMAT/1.724
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("site_obsdate", "location_accuracy_min", "location_accuracy_max", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ russ.HOR[,j] = NA }
russ.sel.h = [c](https://rdrr.io/r/base/c.html)("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "location_accuracy_min", "location_accuracy_max", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "db_13b", "DVOL", "COLEws", "w6clod", "WR10", "WR33", "WR1500", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "PHSLT", "PHH2O", "CECST", "cec_nh4", "wpg2","ksat_lab", "ksat_field")
hydrosprops.EGRPR = russ.HOR[,russ.sel.h]
hydrosprops.EGRPR$source_db = "Russia_EGRPR"
hydrosprops.EGRPR$confidence_degree = 2
hydrosprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
hydrosprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
hydrosprops.EGRPR <- complete.vars(hydrosprops.EGRPR, sel=[c](https://rdrr.io/r/base/c.html)("WR33", "WR1500"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LAT"))
#summary(hydrosprops.EGRPR$WR1500)
saveRDS.gz(hydrosprops.EGRPR, "/mnt/diskstation/data/Soil_points/Russia/EGRPR/hydrosprops.EGRPR.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.EGRPR)
#> [1] 1138 40
```
#### 6\.3\.0\.7 SPADE\-2
* Hannam J.A., Hollis, J.M., Jones, R.J.A., Bellamy, P.H., Hayes, S.E., Holden, A., Van Liedekerke, M.H. and Montanarella, L. (2009\). [SPADE\-2: The soil profile analytical database for Europe, Version 2\.0 Beta Version March 2009](https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2). Unpublished Report, 27pp.
* Kristensen, J. A., Balstrøm, T., Jones, R. J. A., Jones, A., Montanarella, L., Panagos, P., and Breuning\-Madsen, H.: Development of a harmonised soil profile analytical database for Europe: a resource for supporting regional soil management, SOIL, 5, 289–301, [https://doi.org/10\.5194/soil\-5\-289\-2019](https://doi.org/10.5194/soil-5-289-2019), 2019\.
```
if({
spade.PLOT <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_PLOT.csv")
#str(spade.PLOT)
spade.HOR <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_HOR.csv")
spade.PLOT = spade.PLOT[!spade.PLOT$LON_COOR_V>180 & spade.PLOT$LAT_COOR_V>20,]
#plot(spade.PLOT[,c("LON_COOR_V","LAT_COOR_V")])
spade.PLOT$location_accuracy_min = 100
spade.PLOT$location_accuracy_max = NA
#site.names = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees")
spade.PLOT$ProfileID = [paste](https://rdrr.io/r/base/paste.html)(spade.PLOT$CNTY_C, spade.PLOT$PLOT_ID, sep="_")
spade.PLOT$T_Year = 2009
spade.s.lst <- [c](https://rdrr.io/r/base/c.html)("PLOT_ID", "ProfileID", "T_Year", "LON_COOR_V", "LAT_COOR_V", "location_accuracy_min", "location_accuracy_max")
## standardize:
spade.HOR$SLTPPT <- spade.HOR$SILT1_V + spade.HOR$SILT2_V
spade.HOR$SNDPPT <- spade.HOR$SAND1_V + spade.HOR$SAND2_V + spade.HOR$SAND3_V
spade.HOR$PHIKCL <- NA
spade.HOR$PHIKCL[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A14")] <- spade.HOR$PH_V[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A14")]
spade.HOR$PHIHO5 <- NA
spade.HOR$PHIHO5[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A12")] <- spade.HOR$PH_V[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A12")]
#summary(spade.HOR$BD_V)
for(j in [c](https://rdrr.io/r/base/c.html)("site_obsdate", "layer_sequence", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "w15bfm", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ spade.HOR[,j] = NA }
spade.h.lst = [c](https://rdrr.io/r/base/c.html)("HOR_ID","PLOT_ID","layer_sequence","HOR_BEG_V","HOR_END_V","HOR_NAME","db_13b", "BD_V", "COLEws", "w6clod", "w10cld", "WCFC_V", "WC4_V", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY_V", "SLTPPT", "SNDPPT", "OC_V", "PHIKCL", "PHIHO5", "CEC_V", "cec_nh4", "GRAV_C", "ksat_lab", "ksat_field")
hydrosprops.SPADE2 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(spade.PLOT[,spade.s.lst], spade.HOR[,spade.h.lst])
hydrosprops.SPADE2$source_db = "SPADE2"
hydrosprops.SPADE2$confidence_degree = 15
hydrosprops.SPADE2$project_url = "https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2"
hydrosprops.SPADE2$citation_url = "https://doi.org/10.1016/j.landusepol.2011.07.003"
hydrosprops.SPADE2 <- complete.vars(hydrosprops.SPADE2, sel=[c](https://rdrr.io/r/base/c.html)("WCFC_V", "WC4_V"), coords = [c](https://rdrr.io/r/base/c.html)("LON_COOR_V","LAT_COOR_V"))
#summary(hydrosprops.SPADE2$WC4_V)
#summary(is.na(hydrosprops.SPADE2$WC4_V))
#hist(hydrosprops.SPADE2$WC4_V, breaks=45, col="gray")
saveRDS.gz(hydrosprops.SPADE2, "/mnt/diskstation/data/Soil_points/EU/SPADE/hydrosprops.SPADE2.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.SPADE2)
#> [1] 1182 40
```
#### 6\.3\.0\.8 Canada National Pedon Database
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0).
```
if({
NPDB.nm = [c](https://rdrr.io/r/base/c.html)("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
#str(NPDB.HOR)
#summary(NPDB.HOR$BULK_DEN)
## 0 values -> ERROR!
## add missing columns
NPDB.HOR$HISMMN = [paste0](https://rdrr.io/r/base/paste.html)(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ph_kcl", "ksat_lab", "ksat_field")){ NPDB.HOR[,j] = NA }
npdb.sel.h = [c](https://rdrr.io/r/base/c.html)("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "CONF_METRS", "location_accuracy_max", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "db_13b", "BULK_DEN", "COLEws", "w6clod", "w10cld", "RETN_33KP", "RETN_1500K", "RETN_HYGR", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "CARB_ORG", "ph_kcl", "PH_H2O", "CEC", "cec_nh4", "VC_SAND", "ksat_lab", "ksat_field")
hydrosprops.NPDB = NPDB.HOR[,npdb.sel.h]
hydrosprops.NPDB$source_db = "Canada_NPDB"
hydrosprops.NPDB$confidence_degree = 1
hydrosprops.NPDB$project_url = "https://open.canada.ca/data/en/"
hydrosprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
hydrosprops.NPDB <- complete.vars(hydrosprops.NPDB, sel=[c](https://rdrr.io/r/base/c.html)("RETN_33KP", "RETN_1500K"), coords = [c](https://rdrr.io/r/base/c.html)("DD_LONG", "DD_LAT"))
saveRDS.gz(hydrosprops.NPDB, "/mnt/diskstation/data/Soil_points/Canada/NPDB/hydrosprops.NPDB.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.NPDB)
#> [1] 404 40
```
#### 6\.3\.0\.9 ETH imported data from literature
* Digitized soil hydraulic measurements from the literature by the [ETH Soil and Terrestrial Environmental Physics](https://step.ethz.ch/) group.
```
if({
xlsxFile = [list.files](https://rdrr.io/r/base/list.files.html)(pattern="Global_soil_water_tables.xlsx", full.names = TRUE, recursive = TRUE)
wb = openxlsx::[getSheetNames](https://rdrr.io/pkg/openxlsx/man/getSheetNames.html)(xlsxFile)
eth.tbl = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)(
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_imported_literature"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_imported_literature_more"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_extra_data set"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Tibetan_plateau"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Belgium_Vereecken_data"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Australia_dataset"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Florida_Soils_Ksat"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "China_dataset"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Sand_dunes_Siberia_database"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "New_data_4_03")
)
#dim(eth.tbl)
#summary(as.factor(eth.tbl$reference_source))
## Data quality tables
lab.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Quality_per_site_key")
lab.cd = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(eth.tbl["site_key"], lab.ql)$confidence_degree
eth.tbl$confidence_degree = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(eth.tbl$confidence_degree), lab.cd, eth.tbl$confidence_degree)
#summary(as.factor(eth.tbl$confidence_degree))
## missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "labsampnum", "layer_sequence", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_sum", "cec_nh4", "wpg2", "project_url", "citation_url")){ eth.tbl[,j] = NA }
hydrosprops.ETH = eth.tbl[,col.names]
col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(eth.tbl))]
hydrosprops.ETH$project_url = "https://step.ethz.ch/"
hydrosprops.ETH$citation_url = "https://doi.org/10.5194/essd-2020-149"
hydrosprops.ETH = complete.vars(hydrosprops.ETH)
#hist(hydrosprops.ETH$w15l2, breaks=45, col="gray")
#hist(log1p(hydrosprops.ETH$ksat_lab), breaks=45, col="gray")
saveRDS.gz(hydrosprops.ETH, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.ETH.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.ETH)
#> [1] 9023 40
```
#### 6\.3\.0\.10 HYBRAS
* Ottoni, M. V., Ottoni Filho, T. B., Schaap, M. G., Lopes\-Assad, M. L. R., \& Rotunno Filho, O. C. (2018\). [Hydrophysical database for Brazilian soils (HYBRAS) and pedotransfer functions for water retention](http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html). Vadose Zone Journal, 17(1\).
```
if({
hybras.HOR = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "HYBRAS.V1_integrated_tables_RAW")
#str(hybras.HOR)
  ## some points had only UTM coordinates and had to be manually corrected
## subset to unique values:
hybras.HOR = hybras.HOR[,]
#summary(hybras.HOR$bulk_den)
#hist(hybras.HOR$ksat, breaks=35, col="grey")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_sum", "cec_nh4", "ph_kcl", "ph_h2o", "ksat_field", "uuid")){ hybras.HOR[,j] = NA }
hybras.HOR$w3cld = [rowMeans](https://rdrr.io/r/base/colSums.html)(hybras.HOR[,[c](https://rdrr.io/r/base/c.html)("theta20","theta50")], na.rm = TRUE)
hybras.sel.h = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "year", "LongitudeOR", "LatitudeOR", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "top_depth", "bot_depth", "horizon", "db_13b", "bulk_den", "COLEws", "w6clod", "theta10", "w3cld", "theta15000", "satwat", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay", "silt", "sand", "org_carb", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "vc_sand", "ksat", "ksat_field")
hydrosprops.HYBRAS = hybras.HOR[,hybras.sel.h]
hydrosprops.HYBRAS$source_db = "HYBRAS"
hydrosprops.HYBRAS$confidence_degree = 1
for(i in [c](https://rdrr.io/r/base/c.html)("theta10", "w3cld", "theta15000", "satwat")){ hydrosprops.HYBRAS[,i] = hydrosprops.HYBRAS[,i]*100 }
#summary(hydrosprops.HYBRAS$theta10)
#summary(hydrosprops.HYBRAS$satwat)
#hist(hydrosprops.HYBRAS$theta10, breaks=45, col="gray")
#hist(log1p(hydrosprops.HYBRAS$ksat), breaks=45, col="gray")
#summary(!is.na(hydrosprops.HYBRAS$ksat))
hydrosprops.HYBRAS$project_url = "http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html"
hydrosprops.HYBRAS$citation_url = "https://doi.org/10.2136/vzj2017.05.0095"
hydrosprops.HYBRAS <- complete.vars(hydrosprops.HYBRAS, sel=[c](https://rdrr.io/r/base/c.html)("w3cld", "theta15000", "ksat", "ksat_field"), coords = [c](https://rdrr.io/r/base/c.html)("LongitudeOR", "LatitudeOR"))
saveRDS.gz(hydrosprops.HYBRAS, "/mnt/diskstation/data/Soil_points/INT/HYBRAS/hydrosprops.HYBRAS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.HYBRAS)
#> [1] 814 40
```
#### 6\.3\.0\.11 UNSODA
* Nemes, Attila; Schaap, Marcel; Leij, Feike J.; Wösten, J. Henk M. (2015\). [UNSODA 2\.0: Unsaturated Soil Hydraulic Database](https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties). Database and program for indirect methods of estimating unsaturated hydraulic properties. US Salinity Laboratory \- ARS \- USDA. [https://doi.org/10\.15482/USDA.ADC/1173246](https://doi.org/10.15482/USDA.ADC/1173246). Accessed 2020\-06\-08\.
```
if({
unsoda.LOC = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/general_c.csv")
#unsoda.LOC = unsoda.LOC[!unsoda.LOC$Lat==0,]
#plot(unsoda.LOC[,c("Long","Lat")])
unsoda.SOIL = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/soil_properties.csv")
#summary(unsoda.SOIL$k_sat)
## Soil water retention in lab:
tmp.hyd = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/lab_drying_h-t.csv")
#str(tmp.hyd)
tmp.hyd = tmp.hyd[,]
tmp.hyd$theta = tmp.hyd$theta*100
#head(tmp.hyd)
pr.lst = [c](https://rdrr.io/r/base/c.html)(6,10,33,15000)
cl.lst = [c](https://rdrr.io/r/base/c.html)("w6clod", "w10cld", "w3cld", "w15l2")
tmp.hyd.tbl = [data.frame](https://rdrr.io/r/base/data.frame.html)(code=[unique](https://Rdatatable.gitlab.io/data.table/reference/duplicated.html)(tmp.hyd$code), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(pr.lst)){
tmp.hyd.tbl[,cl.lst[i]] = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tmp.hyd.tbl, tmp.hyd[[which](https://rdrr.io/r/base/which.html)(tmp.hyd$preshead==pr.lst[i]),[c](https://rdrr.io/r/base/c.html)("code","theta")], match="first")$theta
}
#head(tmp.hyd.tbl)
## ksat
kst.lev = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_lab_sat_cond.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/methodology.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met$comment_lsc = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(kst.met[[c](https://rdrr.io/r/base/c.html)("comment_lsc_ID")], kst.lev)$comment_lsc)
kst.met$comment_lsc[[which](https://rdrr.io/r/base/which.html)(kst.met$comment_lsc=="NA")] = NA
kst.fld = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_field_sat_cond.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met$comment_fsc = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(kst.met[[c](https://rdrr.io/r/base/c.html)("comment_fsc_ID")], kst.fld)$comment_fsc)
kst.met$comment_fsc[[which](https://rdrr.io/r/base/which.html)(kst.met$comment_fsc=="NA")] = NA
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(kst.met$comment_lsc))
kst.met$comment_met = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(kst.met$comment_lsc)&
unsoda.SOIL$comment_met = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(unsoda.SOIL[[c](https://rdrr.io/r/base/c.html)("code")], kst.met)$comment_met)
#summary(as.factor(unsoda.SOIL$comment_met))
sel.fld = unsoda.SOIL$comment_met [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("field Double ring infiltrometer","field Ponding", "field Steady infiltration")
unsoda.SOIL$ksat_lab[[which](https://rdrr.io/r/base/which.html)(!sel.fld)] = unsoda.SOIL$k_sat[[which](https://rdrr.io/r/base/which.html)(!sel.fld)]
unsoda.SOIL$ksat_field[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.SOIL$ksat_lab)] = unsoda.SOIL$k_sat[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.SOIL$ksat_lab)]
unsoda.col = join_all([list](https://rdrr.io/r/base/list.html)(unsoda.LOC, unsoda.SOIL, tmp.hyd.tbl))
#head(unsoda.col)
#summary(unsoda.col$OM_content)
unsoda.col$oc = [signif](https://rdrr.io/r/base/Round.html)(unsoda.col$OM_content/1.724, 4)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ph_kcl", "wpg2")){ unsoda.col[,j] = NA }
unsoda.sel.h = [c](https://rdrr.io/r/base/c.html)("code", "usiteid", "date", "Long", "Lat", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "depth_upper", "depth_lower", "horizon", "db_13b", "bulk_density", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture", "Clay", "Silt", "Sand", "oc", "ph_kcl", "pH", "CEC", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydrosprops.UNSODA = unsoda.col[,unsoda.sel.h]
hydrosprops.UNSODA$source_db = "UNSODA"
## corrected coordinates:
unsoda.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "UNSODA_degree")
hydrosprops.UNSODA$confidence_degree = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$confidence_degree
hydrosprops.UNSODA$Texture = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$tex_psda
hydrosprops.UNSODA$location_accuracy_min = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_min
hydrosprops.UNSODA$location_accuracy_max = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_max
## replace coordinates
unsoda.Long = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_long
unsoda.Lat = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_lat
hydrosprops.UNSODA$Long = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(unsoda.Long), hydrosprops.UNSODA$Long, unsoda.Long)
hydrosprops.UNSODA$Lat = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(unsoda.Long), hydrosprops.UNSODA$Lat, unsoda.Lat)
#hist(hydrosprops.UNSODA$w15l2, breaks=45, col="gray")
#hist(hydrosprops.UNSODA$ksat_lab, breaks=45, col="gray")
unsoda.rem = hydrosprops.UNSODA$code [%in%](https://rdrr.io/r/base/match.html) unsoda.ql$code[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.ql$additional_information)]
#summary(unsoda.rem)
hydrosprops.UNSODA = hydrosprops.UNSODA[unsoda.rem,]
## texture fractions sometimes need to be multiplied by 100!
#hydrosprops.UNSODA[hydrosprops.UNSODA$code==2220,]
sum.tex.1 = [rowSums](https://rdrr.io/r/base/colSums.html)(hydrosprops.UNSODA[,[c](https://rdrr.io/r/base/c.html)("Clay", "Silt", "Sand")], na.rm = TRUE)
sum.tex.r = [which](https://rdrr.io/r/base/which.html)(sum.tex.1<1.2 & sum.tex.1>0)
for(j in [c](https://rdrr.io/r/base/c.html)("Clay", "Silt", "Sand")){
hydrosprops.UNSODA[sum.tex.r,j] = hydrosprops.UNSODA[sum.tex.r,j] * 100
}
hydrosprops.UNSODA$project_url = "https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties"
hydrosprops.UNSODA$citation_url = "https://doi.org/10.15482/USDA.ADC/1173246"
hydrosprops.UNSODA <- complete.vars(hydrosprops.UNSODA, coords = [c](https://rdrr.io/r/base/c.html)("Long", "Lat"))
saveRDS.gz(hydrosprops.UNSODA, "/mnt/diskstation/data/Soil_points/INT/UNSODA/hydrosprops.UNSODA.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.UNSODA)
#> [1] 298 40
```
#### 6\.3\.0\.12 HYDROS
* Schindler, Uwe; Müller, Lothar (2015\): [Soil hydraulic functions of international soils measured with the Extended Evaporation Method (EEM) and the HYPROP device](http://dx.doi.org/10.4228/ZALF.2003.273), Leibniz\-Zentrum für Agrarlandschaftsforschung (ZALF) e.V.\[doi: 10\.4228/ZALF.2003\.273]
```
if({
hydros.tbl = read.csv("/mnt/diskstation/data/Soil_points/INT/HydroS/int_rawret.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
hydros.tbl = hydros.tbl[,]
#summary(hydros.tbl$TENSION)
hydros.tbl$TENSIONc = cut(hydros.tbl$TENSION, breaks=c(1,5,8,15,30,40,1000,15001))
#summary(hydros.tbl$TENSIONc)
hydros.tbl$WATER_CONTENT = hydros.tbl$WATER_CONTENT
#summary(hydros.tbl$WATER_CONTENT)
#head(hydros.tbl)
pr2.lst = c("(5,8]", "(8,15]","(30,40]","(1e+03,1.5e+04]")
cl.lst = c("w6clod", "w10cld", "w3cld", "w15l2")
hydros.tbl.df = data.frame(SITE_ID=unique(hydros.tbl$SITE_ID), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:length(pr2.lst)){
hydros.tbl.df[,cl.lst[i]] = plyr::join(hydros.tbl.df, hydros.tbl[which(hydros.tbl$TENSIONc==pr2.lst[i]),c("SITE_ID","WATER_CONTENT")], match="first")$WATER_CONTENT
}
#head(hydros.tbl.df)
## properties:
hydros.soil = read.csv("/mnt/diskstation/data/Soil_points/INT/HydroS/int_basicdata.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
#head(hydros.soil)
#plot(hydros.soil[,c("H","R")])
hydros.col = plyr::join(hydros.soil, hydros.tbl.df)
#summary(hydros.col$OMC)
hydros.col$oc = hydros.col$OMC/1.724
hydros.col$location_accuracy_min = 100
hydros.col$location_accuracy_max = NA
for(j in c("layer_sequence", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ hydros.col[,j] = NA }
hydros.sel.h = c("SITE_ID", "SITE", "SAMP_DATE", "H", "R", "location_accuracy_min", "location_accuracy_max", "SAMP_NO", "layer_sequence", "TOP_DEPTH", "BOT_DEPTH", "HORIZON", "db_13b", "BULK_DENSITY", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydros.sel.h[which(!hydros.sel.h %in% names(hydros.col))]
hydrosprops.HYDROS = hydros.col[,hydros.sel.h]
hydrosprops.HYDROS$source_db = "HydroS"
hydrosprops.HYDROS$confidence_degree = 1
hydrosprops.HYDROS$project_url = "http://dx.doi.org/10.4228/ZALF.2003.273"
hydrosprops.HYDROS$citation_url = "https://doi.org/10.18174/odjar.v3i1.15763"
hydrosprops.HYDROS <- complete.vars(hydrosprops.HYDROS, coords = c("H","R"))
saveRDS.gz(hydrosprops.HYDROS, "/mnt/diskstation/data/Soil_points/INT/HYDROS/hydrosprops.HYDROS.rds")
}
dim(hydrosprops.HYDROS)
#> [1] 153 40
```
#### 6\.3\.0\.13 SWIG
* Rahmati, M., Weihermüller, L., Vanderborght, J., Pachepsky, Y. A., Mao, L., Sadeghi, S. H., … \& Toth, B. (2018\). [Development and analysis of the Soil Water Infiltration Global database](https://doi.org/10.5194/essd-10-1237-2018). Earth Syst. Sci. Data, 10, 1237–1263\.
```
if({
meta.tbl = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Metadata.csv", skip = 1, fill = TRUE, blank.lines.skip=TRUE, flush=TRUE, stringsAsFactors=FALSE)
swig.xy = [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Locations.csv", sep=";", dec = ",", stringsAsFactors=FALSE, header=TRUE, na.strings = [c](https://rdrr.io/r/base/c.html)("-",""," "), fill = TRUE)
swig.xy$x = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(",", ".", swig.xy$x))
swig.xy$y = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(",", ".", swig.xy$y))
swig.xy = swig.xy[,1:8]
[names](https://rdrr.io/r/base/names.html)(swig.xy)[3] = "EndDataset"
[library](https://rdrr.io/r/base/library.html)([tidyr](https://tidyr.tidyverse.org))
swig.xyf = tidyr::[fill](https://tidyr.tidyverse.org/reference/fill.html)(swig.xy, [c](https://rdrr.io/r/base/c.html)("Dataset","EndDataset"))
swig.xyf$N = swig.xyf$EndDataset - swig.xyf$Dataset + 1
swig.xyf$N = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(swig.xyf$N<1,1,swig.xyf$N)
swig.xyf = swig.xyf[,]
#plot(swig.xyf[,c("x","y")])
swig.xyf.df = swig.xyf[[rep](https://rdrr.io/r/base/rep.html)([seq_len](https://rdrr.io/r/base/seq.html)([nrow](https://rdrr.io/r/base/nrow.html)(swig.xyf)), swig.xyf$N),]
rn = [sapply](https://rdrr.io/r/base/lapply.html)([row.names](https://rdrr.io/r/base/row.names.html)(swig.xyf.df), function(i){[as.numeric](https://rdrr.io/r/base/numeric.html)([strsplit](https://Rdatatable.gitlab.io/data.table/reference/tstrsplit.html)(i, "\\.")[[1]][2])})
swig.xyf.df$Code = [rowSums](https://rdrr.io/r/base/colSums.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(rn, swig.xyf.df$Dataset), na.rm = TRUE)
## bind together
swig.col = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(swig.xyf.df[,[c](https://rdrr.io/r/base/c.html)("Code","x","y")], meta.tbl)
## additional values for ksat
swig2.tbl = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Statistics.csv", fill = TRUE, blank.lines.skip=TRUE, sep=";", dec = ",", flush=TRUE, stringsAsFactors=FALSE)
#hist(log1p(as.numeric(swig2.tbl$Ks..cm.hr.)), breaks=45, col="gray")
swig.col$Ks..cm.hr. = [as.numeric](https://rdrr.io/r/base/numeric.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(swig.col["Code"], swig2.tbl[[c](https://rdrr.io/r/base/c.html)("Code","Ks..cm.hr.")])$Ks..cm.hr.)
swig.col$Ks..cm.hr. = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(swig.col$Ks..cm.hr. * 24 <= 0.01, NA, swig.col$Ks..cm.hr.)
swig.col$Ksat = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(swig.col$Ksat), swig.col$Ks..cm.hr., swig.col$Ksat)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "site_obsdate", "labsampnum", "layer_sequence", "hzn_desgn", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_nh4", "ksat_lab")){ swig.col[,j] = NA }
## depths are missing?
swig.col$hzn_top = 0
swig.col$hzn_bot = 20
swig.col$location_accuracy_min = NA
swig.col$location_accuracy_max = NA
swig.col$w15l2 = swig.col$PWP * 100
swig.col$w3cld = swig.col$FC * 100
swig.sel.h = [c](https://rdrr.io/r/base/c.html)("Code", "usiteid", "site_obsdate", "x", "y", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "db_13b", "Db", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture.Class", "Clay", "Silt", "Sand", "OC", "ph_kcl", "pH", "CEC", "cec_nh4", "Gravel", "ksat_lab", "Ksat")
swig.sel.h[[which](https://rdrr.io/r/base/which.html)(!swig.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(swig.col))]
hydrosprops.SWIG = swig.col[,swig.sel.h]
hydrosprops.SWIG$source_db = "SWIG"
hydrosprops.SWIG$Ksat = hydrosprops.SWIG$Ksat * 24 ## convert to days
#hist(hydrosprops.SWIG$w3cld, breaks=45, col="gray")
#hist(log1p(hydrosprops.SWIG$Ksat), breaks=25, col="gray")
#summary(hydrosprops.SWIG$Ksat); summary(hydrosprops.UNSODA$ksat_lab)
## confidence degree
SWIG.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "SWIG_database_Confidence_degree")
hydrosprops.SWIG$confidence_degree = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$confidence_degree
hydrosprops.SWIG$location_accuracy_min = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_min
hydrosprops.SWIG$location_accuracy_max = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_max
#summary(as.factor(hydrosprops.SWIG$confidence_degree))
## replace coordinates
SWIG.Long = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_long
SWIG.Lat = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_lat
hydrosprops.SWIG$x = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(SWIG.Long), hydrosprops.SWIG$x, SWIG.Long)
hydrosprops.SWIG$y = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(SWIG.Long), hydrosprops.SWIG$y, SWIG.Lat)
hydrosprops.SWIG$Texture.Class = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$tex_psda
swig.lab = SWIG.ql$Code[[which](https://rdrr.io/r/base/which.html)(SWIG.ql$Ksat_Method [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("Constant head method", "Constant Head Method", "Falling head method"))]
hydrosprops.SWIG$ksat_lab[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab] = hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab]
hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab] = NA
## remove duplicates
swig.rem = hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) SWIG.ql$Code[[is.na](https://rdrr.io/r/base/NA.html)(SWIG.ql$additional_information)]
#summary(swig.rem)
#Mode FALSE TRUE
#logical 200 6921
hydrosprops.SWIG = hydrosprops.SWIG[swig.rem,]
hydrosprops.SWIG = hydrosprops.SWIG[,]
## remove all ksat values < 0.01 ?
#summary(hydrosprops.SWIG$Ksat < 0.01)
hydrosprops.SWIG$project_url = "https://soil-modeling.org/resources-links/data-portal/swig"
hydrosprops.SWIG$citation_url = "https://doi.org/10.5194/essd-10-1237-2018"
hydrosprops.SWIG <- complete.vars(hydrosprops.SWIG, sel=[c](https://rdrr.io/r/base/c.html)("w15l2","w3cld","ksat_lab","Ksat"), coords=[c](https://rdrr.io/r/base/c.html)("x","y"))
saveRDS.gz(hydrosprops.SWIG, "/mnt/diskstation/data/Soil_points/INT/SWIG/hydrosprops.SWIG.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.SWIG)
#> [1] 3676 40
```
#### 6\.3\.0\.14 Pseudo\-points
* Pseudo\-observations using simulated points (world deserts)
```
if({
## 0 soil organic carbon + 98% sand content (deserts)
sprops.SIM = readRDS("/mnt/diskstation/data/LandGIS/training_points/soil_props/sprops.SIM.rds")
sprops.SIM$w10cld = 3.1
sprops.SIM$w3cld = 1.2
sprops.SIM$w15l2 = 0.8
sprops.SIM$tex_psda = "sand"
sprops.SIM$usiteid = sprops.SIM$lcv_admin0_fao.gaul_c_250m_s0..0cm_2015_v1.0
sprops.SIM$longitude_decimal_degrees = sprops.SIM$x
sprops.SIM$latitude_decimal_degrees = sprops.SIM$y
## Very approximate values for Ksat for shifting sand:
tax.r = raster::extract(raster("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif"), sprops.SIM[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
tax.leg = read.csv("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif.csv")
tax.ksat_lab = aggregate(eth.tbl$ksat_lab, by=list(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.ksat_lab.sd = aggregate(eth.tbl$ksat_lab, by=list(Group=eth.tbl$tax_grtgroup), FUN=sd, na.rm=TRUE)
tax.ksat_field = aggregate(eth.tbl$ksat_field, by=list(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.leg$ksat_lab = join(tax.leg, tax.ksat_lab)$x
tax.leg$ksat_field = join(tax.leg, tax.ksat_field)$x
tax.sel = c("cryochrepts","cryorthods","torripsamments","haplustolls","torrifluvents")
sprops.SIM$ksat_field = join(data.frame(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group %in% tax.sel,])$ksat_field
sprops.SIM$ksat_lab = join(data.frame(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group %in% tax.sel,])$ksat_lab
#summary(sprops.SIM$ksat_lab)
#summary(sprops.SIM$ksat_field)
#View(sprops.SIM)
for(j in col.names[which(!col.names %in% names(sprops.SIM))]){ sprops.SIM[,j] <- NA }
sprops.SIM$project_url = "https://gitlab.com/openlandmap/global-layers"
sprops.SIM$citation_url = ""
hydrosprops.SIM = sprops.SIM[,col.names]
hydrosprops.SIM$confidence_degree = 30
saveRDS.gz(hydrosprops.SIM, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.SIM.rds")
}
dim(hydrosprops.SIM)
#> [1] 8133 40
```
6\.4 Bind all datasets
----------------------
#### 6\.4\.0\.1 Bind and clean\-up
Bind all tables / rename columns where necessary:
```
ls(pattern=glob2rx("hydrosprops.*"))
#> [1] "hydrosprops.AfSPDB" "hydrosprops.EGRPR" "hydrosprops.ETH"
#> [4] "hydrosprops.FRED" "hydrosprops.HYBRAS" "hydrosprops.HYDROS"
#> [7] "hydrosprops.ISIS" "hydrosprops.NCSS" "hydrosprops.NPDB"
#> [10] "hydrosprops.SIM" "hydrosprops.SPADE2" "hydrosprops.SWIG"
#> [13] "hydrosprops.UNSODA" "hydrosprops.WISE"
tot_sprops = dplyr::bind_rows(lapply(ls(pattern=glob2rx("hydrosprops.*")), function(i){ mutate_all(setNames(get(i), col.names), as.character) }))
## convert to numeric:
for(j in c("longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "hzn_top","hzn_bot", "oc", "ph_h2o", "ph_kcl", "db_od", "clay_tot_psa", "sand_tot_psa","silt_tot_psa", "wpg2", "db_13b", "COLEws", "w15cly", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13","cec7_cly", "cec_sum", "cec_nh4", "ksat_lab","ksat_field")){
tot_sprops[,j] = as.numeric(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
## rename some columns:
tot_sprops = plyr::rename(tot_sprops, replace = c("db_od" = "db", "ph_h2o" = "ph_h2o_v", "oc" = "oc_v"))
summary(as.factor(tot_sprops$source_db))
#> AfSPDB Australian_ksat_data Belgian_ksat_data
#> 10720 118 145
#> Canada_NPDB China_ksat_data ETH_literature
#> 404 209 1954
#> Florida_ksat_data FRED HYBRAS
#> 6532 3761 814
#> HydroS ISRIC_ISIS ISRIC_WISE
#> 153 1176 1325
#> Russia_EGRPR SIMULATED SPADE2
#> 1138 8133 1182
#> SWIG Tibetan_plateau_ksat_data UNSODA
#> 3676 65 298
#> USDA_NCSS
#> 113991
```
Add a unique row identifier:
```
tot_sprops$uuid = uuid::UUIDgenerate(use.time=TRUE, n=nrow(tot_sprops))
```
and a unique location ID based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::encode_olc(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
length(levels(as.factor(tot_sprops$olc_id)))
#> [1] 25075
```
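Because the plus code encodes location only, `olc_id` can also serve as a site key that groups horizons measured at the same place. A quick sketch (assuming `tot_sprops` is still in memory):
```
## count horizons per unique location using the plus code as a key
head(sort(table(tot_sprops$olc_id), decreasing = TRUE))
```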
#### 6\.4\.0\.2 Quality\-control spatial locations
Unique locations:
```
tot_sprops.pnts = tot_sprops[]
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "+init=epsg:4326"
```
Remove points falling in the sea or similar:
```
if({
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::rast("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::extract(mask, terra::vect(tot_sprops.pnts))
summary(as.factor(ov.sprops[,2]))
if(sum(is.na(ov.sprops[,2]))>0 | sum(ov.sprops[,2]==2)>0){
rem.lst = which(is.na(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
} else {
rem.sp = NA
}
}
## final number of unique spatial locations:
nrow(tot_sprops.pnts)
#> [1] 25075
```
#### 6\.4\.0\.3 Clean\-up
Clean up typos and physically impossible values:
```
for(j in c("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2", "w6clod", "w10cld", "w3cld", "w15l2")){
tot_sprops[,j] = ifelse(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
for(j in c("ph_h2o_v","ph_kcl")){
tot_sprops[,j] = ifelse(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in c("db")){
tot_sprops[,j] = ifelse(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#summary(tot_sprops$ksat_lab)
for(j in c("ksat_lab","ksat_field")){
tot_sprops[,j] = ifelse(tot_sprops[,j] <=0, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in c("oc_v")){
tot_sprops[,j] = ifelse(tot_sprops[,j]>90|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
#tot_sprops = tot_sprops[!is.na(tot_sprops$hzn_depth),]
## texture fractions check:
sum.tex.T = rowSums(tot_sprops[,c("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")], na.rm = TRUE)
which(sum.tex.T<1.2 & sum.tex.T>0)
#> [1] 9334 9979 12371 22431 22441 81311 81312 81313 93971 150217
for(i in which(sum.tex.T<1.2 & sum.tex.T>0)){
for(j in c("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")){
tot_sprops[i,j] <- NA
}
}
```
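The texture-fraction check above flags rows where clay, silt and sand were apparently reported as proportions rather than percentages. A toy illustration (invented values):
```
tex <- data.frame(clay_tot_psa = c(25, 0.25),
                  silt_tot_psa = c(35, 0.35),
                  sand_tot_psa = c(40, 0.40))
rowSums(tex, na.rm = TRUE)
#> [1] 100   1
```
The second row falls in the 0–1.2 range; in the clean-up above such rows are set to NA rather than rescaled.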
#### 6\.4\.0\.4 Histogram plots
```
library(ggplot2)
#ggplot(tot_sprops[tot_sprops$w15l2<100,], aes(x=source_db, y=w15l2)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
ggplot(tot_sprops[tot_sprops$w3cld<100,], aes(x=source_db, y=w3cld)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
#> Warning: Removed 68935 rows containing non-finite values (stat_boxplot).
```
```
ggplot(tot_sprops, aes(x=source_db, y=db)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
#> Warning: Removed 69588 rows containing non-finite values (stat_boxplot).
```
```
ggplot(tot_sprops, aes(x=source_db, y=ph_h2o_v)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
#> Warning: Removed 122423 rows containing non-finite values (stat_boxplot).
```
```
ggplot(tot_sprops, aes(x=source_db, y=log10(ksat_field+1))) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
#> Warning: Removed 144683 rows containing non-finite values (stat_boxplot).
```
```
ggplot(tot_sprops, aes(x=source_db, y=log1p(ksat_lab))) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
#> Warning: Removed 139595 rows containing non-finite values (stat_boxplot).
```
#### 6\.4\.0\.5 Convert to wide format
Add `layer_sequence` where missing, since it is needed to convert the table to wide format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = paste(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if({
library(dplyr)
## Note: takes >1 min
l.s1 <- tot_sprops[,c("olc_id","hzn_depth")] %>% group_by(olc_id) %>% mutate(layer_sequence.f = data.table::frank(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = ifelse(is.na(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = ifelse(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
}
```
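A toy example (invented values) of how the per-location ranking of `hzn_depth` produces a layer sequence, here shown with `stats::ave()` instead of the `dplyr` pipeline used above:
```
d <- data.frame(olc_id = c("A", "A", "A", "B", "B"),
                hzn_depth = c(45, 5, 15, 10, 60))
d$layer_sequence.f <- stats::ave(d$hzn_depth, d$olc_id,
                                 FUN = function(x) data.table::frank(x, ties.method = "first"))
d
#>   olc_id hzn_depth layer_sequence.f
#> 1      A        45                3
#> 2      A         5                1
#> 3      A        15                2
#> 4      B        10                1
#> 5      B        60                2
```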
Convert the long table to [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column:
```
if({
library(data.table)
hor.names.s = c("hzn_top", "hzn_bot", "hzn_desgn", "db", "w6clod", "w3cld", "w15l2", "adod", "wrd_ws13", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc_v", "ph_kcl", "ph_h2o_v", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field", "uuid")
tot_sprops.w = data.table::dcast( as.data.table(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = hor.names.s,
fun=function(x){ x[1] },
verbose = FALSE)
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::join(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
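A toy example (invented values) of the reshape: `dcast()` turns one row per horizon into one row per location, with one column per property and layer:
```
library(data.table)
long <- data.table(olc_id = c("A", "A", "B"),
                   layer_sequence.f = c(1, 2, 1),
                   w3cld = c(31.5, 28.0, 22.1))
dcast(long, olc_id ~ layer_sequence.f, value.var = "w3cld")
## -> one row per olc_id, with columns "1" and "2" holding w3cld per layer
```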
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$site_key %in% rem.sp
unlink("./out/gpkg/sol_hydro.pnts_horizons.gpkg")
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_hydro.pnts_horizons.gpkg", "sol_hydro.pnts_horizons", driver="GPKG")
```
#### 6\.4\.0\.6 Ksat dataset:
```
sel.compl =
summary(sel.compl)
#> Mode FALSE TRUE
#> logical 131042 24752
## complete ksat_field points
tot_sprops.pnts.C = tot_sprops[ & !tot_sprops$source_db == "SIMULATED",]
sum.NA = sapply(tot_sprops.pnts.C, function(i){sum(is.na(i))})
tot_sprops.pnts.C = tot_sprops.pnts.C[,!sum.NA==nrow(tot_sprops.pnts.C)]
tot_sprops.pnts.C$w10cld = NULL
tot_sprops.pnts.C$w6clod = NULL
tot_sprops.pnts.C$w15bfm = NULL
tot_sprops.pnts.C$site_obsdate = NULL
tot_sprops.pnts.C$confidence_degree = NULL
tot_sprops.pnts.C$cec_sum = NULL
tot_sprops.pnts.C$wpg2 = NULL
tot_sprops.pnts.C$uuid = NULL
dim(tot_sprops.pnts.C)
#> [1] 13258 25
```
#### 6\.4\.0\.7 RDS files
Plot in the Goode Homolosine projection and save the final objects:
```
if({
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1])
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_hydro.pnts_sites.pdf")
system("pdftoppm ./img/sol_hydro.pnts_sites.pdf ./img/sol_hydro.pnts_sites -png -f 1 -singlefile")
system("convert -crop 1280x575+36+114 ./img/sol_hydro.pnts_sites.png ./img/sol_hydro.pnts_sites.png")
}
```
Fig. 1: Soil profiles and soil samples with physical and hydraulic soil properties, global compilation.
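`plot_gh()` is a plotting helper defined in the project functions; the core idea is a reprojection to the Interrupted Goode Homolosine projection before plotting. A minimal sketch of that step (assuming only `sf` and the `tot_sprops.pnts_sf` object created above):
```
library(sf)
## Interrupted Goode Homolosine projection string
igh <- "+proj=igh +lon_0=0 +datum=WGS84 +units=m +no_defs"
pnts.igh <- st_transform(tot_sprops.pnts_sf, crs = igh)
plot(st_geometry(pnts.igh), pch = 21, cex = 0.3)
```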
```
if({
sel.ks = which(tot_sprops.pnts$location_id %in% tot_sprops.pnts.C$location_id)
tot_spropsC.pnts_sf <- st_as_sf(tot_sprops.pnts[sel.ks, 1])
plot_gh(tot_spropsC.pnts_sf, out.pdf="./img/sol_ksat.pnts_sites.pdf")
system("pdftoppm ./img/sol_ksat.pnts_sites.pdf ./img/sol_ksat.pnts_sites -png -f 1 -singlefile")
system("convert -crop 1280x575+36+114 ./img/sol_ksat.pnts_sites.png ./img/sol_ksat.pnts_sites.png")
}
```
Fig. 2: Soil profiles and soil samples with Ksat measurements, global compilation.
6\.5 Overlay www.OpenLandMap.org layers
---------------------------------------
Load the tiling system (1 degree grid representing global land mask) and run spatial overlay in parallel:
```
if({
tile.pol = readOGR("./tiles/global_tiling_100km_grid.gpkg")
#length(tile.pol)
ov.sol <- extract.tiled(obj=tot_sprops.pnts, tile.pol=tile.pol, path="/data/tt/LandGIS/grid250m", ID="ID", cpus=64)
## Valid predictors:
pr.vars = unique(unlist(sapply(c("fapar", "landsat", "lc100", "mod09a1", "mod11a2", "alos.palsar", "sm2rain", "irradiation_solar.atlas", "usgs.ecotapestry", "floodmap.500y", "bioclim", "water.table.depth_deltares", "snow.prob_esacci", "water.vapor_nasa.eo", "wind.speed_terraclimate", "merit.dem_m", "merit.hydro_m", "cloud.fraction_earthenv", "water.occurance_jrc", "wetlands.cw_upmc", "pb2002"), function(i){names(ov.sol)[grep(i, names(ov.sol))]})))
str(pr.vars)
## 349
#saveRDS.gz(ov.sol, "/mnt/diskstation/data/Soil_points/ov.sol_hydro.pnts_horizons.rds")
#ov.sol <- readRDS.gz("/mnt/diskstation/data/Soil_points/ov.sol_hydro.pnts_horizons.rds")
## Final regression matrix:
rm.sol = plyr::join(tot_sprops, ov.sol[,c("olc_id", pr.vars)])
## check that there are no duplicates
sum(duplicated(rm.sol$uuid))
rm.ksat = rm.sol[(,]
}
dim(rm.sol)
```
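`extract.tiled()` is a project helper (not shown here) that splits the points by tile and extracts covariate values in parallel. A hypothetical sketch of the idea, where `cov.files` is an assumed list of per-tile covariate GeoTIFF paths, not an actual object from the workflow:
```
library(sp); library(terra); library(parallel)
ov_tile <- function(i, pnts, tile.pol, cov.files){
  sel <- which(!is.na(sp::over(pnts, tile.pol[i, ])[, 1]))  # points falling inside tile i
  if(length(sel) == 0) return(NULL)
  r <- terra::rast(cov.files[[i]])                          # covariate stack for tile i
  cbind(pnts@data[sel, "olc_id", drop = FALSE],
        terra::extract(r, terra::vect(pnts[sel, ]), ID = FALSE))
}
ov.lst <- parallel::mclapply(1:length(tile.pol), ov_tile, pnts = tot_sprops.pnts,
                             tile.pol = tile.pol, cov.files = cov.files, mc.cores = 64)
ov.sol  <- data.table::rbindlist(ov.lst, fill = TRUE)
```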
Save final analysis\-ready objects:
```
saveRDS.gz(tot_sprops, "./out/rds/sol_hydro.pnts_horizons.rds")
saveRDS.gz(tot_sprops.pnts, "/mnt/diskstation/data/Soil_points/sol_hydro.pnts_sites.rds")
## reorder columns
tot_sprops.pnts.C$ID = 1:nrow(tot_sprops.pnts.C)
tot_sprops.pnts.C = tot_sprops.pnts.C[,c(which(names(tot_sprops.pnts.C)=="ID"), which(]
saveRDS.gz(tot_sprops.pnts.C, "./out/rds/sol_ksat.pnts_horizons.rds")
#library(farff)
#writeARFF(tot_sprops, "./out/arff/sol_hydro.pnts_horizons.arff", overwrite = TRUE)
#writeARFF(tot_sprops.pnts.C, "./out/arff/sol_ksat.pnts_horizons.arff", overwrite = TRUE)
## compressed CSV
write.csv(tot_sprops, file=gzfile("./out/csv/sol_hydro.pnts_horizons.csv.gz"))
write.csv(tot_sprops.pnts.C, file=gzfile("./out/csv/sol_ksat.pnts_horizons.csv.gz"), row.names = FALSE)
saveRDS.gz(rm.sol, "./out/rds/sol_hydro.pnts_horizons_rm.rds")
saveRDS.gz(rm.ksat, "./out/rds/sol_ksat.pnts_horizons_rm.rds")
```
Save temp object:
```
#rm(rm.sol); gc()
save.image.pigz(file="soilhydro.RData")
## rmarkdown::render("Index.rmd")
```
6\.1 Overview
-------------
This section describes the import steps used to produce a global compilation of soil
laboratory data with physical and hydraulic soil properties that can then be
used for predictive soil mapping / modeling at global and regional scales.
Read more about computing with soil hydraulic / physical properties in R:
* Gupta, S., Hengl, T., Lehmann, P., Bonetti, S., and Or, D. [**SoilKsatDB: global soil saturated hydraulic conductivity measurements for geoscience applications**](https://doi.org/10.5194/essd-2020-149). Earth Syst. Sci. Data Discuss., [https://doi.org/10\.5194/essd\-2020\-149](https://doi.org/10.5194/essd-2020-149), in review, 2021\.
* de Sousa, D. F., Rodrigues, S., de Lima, H. V., \& Chagas, L. T. (2020\). [R software packages as a tool for evaluating soil physical and hydraulic properties](https://doi.org/10.1016/j.compag.2019.105077). Computers and Electronics in Agriculture, 168, 105077\.
6\.2 Specifications
-------------------
#### 6\.2\.0\.1 Data standards
* Metadata information: [“Soil Survey Investigation Report No. 42\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1253872.pdf) and [“Soil Survey Investigation Report No. 45\.”](https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/nrcs142p2_052226.pdf)
* Model DB: [National Cooperative Soil Survey (NCSS) Soil Characterization Database](https://ncsslabdatamart.sc.egov.usda.gov/)
#### 6\.2\.0\.2 *Target variables:*
```
site.names = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max")
hor.names = c("labsampnum","site_key","layer_sequence","hzn_top","hzn_bot","hzn_desgn","db_13b", "db_od", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
## target structure:
col.names = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "db_13b", "db_od", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field", "source_db", "confidence_degree", "project_url", "citation_url")
```
* `db_13b`: Bulk density (33kPa) in g/cm3 for \<2mm soil fraction,
* `db`: Bulk density (unknown method) in g/cm3 for \<2mm soil fraction,
* `COLEws`: Coefficient of Linear Extensibility (COLE) whole soil in ratio for \<2mm soil fraction,
* `w6clod`: Water Content 6 kPa \<2mm in % wt for \<2mm soil fraction,
* `w10cld`: Water Content 10 kPa \<2mm in % wt for \<2mm soil fraction,
* `w3cld`: Water Content 33 kPa \<2mm in % vol for \<2mm soil fraction (Field Capacity),
* `w15l2`: Water Content 1500 kPa \<2mm in % vol for \<2mm soil fraction (Permanent Wilting Point),
* `w15bfm`: Water Content 1500 kPa moist \<2mm in % wt for \<2mm soil fraction,
* `adod`: Air\-Dry/Oven\-Dry in ratio for \<2mm soil fraction,
* `wrd_ws13`: Water Retention Difference whole soil, 1500\-kPa suction and an upper limit of usually 33\-kPa in cm3 / cm\-3 for \<2mm soil fraction,
* `cec7_cly`: CEC\-7/Clay ratio in ratio for \<2mm soil fraction,
* `w15cly`: CEC/Clay ratio at 1500 kPa in ratio for \<2mm soil fraction,
* `tex_psda`: Texture Determined, PSDA in factor for \<2mm soil fraction,
* `clay_tot_psa`: Total Clay, \<0\.002 mm (\<2 µm) in % wt for \<2mm soil fraction,
* `silt_tot_psa`: Total Silt, 0\.002\-0\.05 mm in % wt for \<2mm soil fraction,
* `sand_tot_psa`: Total Sand, 0\.05\-2\.0 mm in % wt for \<2mm soil fraction,
* `wpg2`: Coarse fragments \>2\-mm weight fraction in % wt for \<2mm soil fraction,
* `hzn_top`: The top (upper) depth of the layer in centimeters. in cm for \<2mm soil fraction,
* `hzn_bot`: The bottom (lower) depth of the layer in centimeters. in cm for \<2mm soil fraction,
* `oc_v`: Organic carbon (unknown method) in % wt for \<2mm soil fraction,
* `ph_kcl`: pH, 1N KCl in ratio for \<2mm soil fraction,
* `ph_h2o_v`: pH in water (unknown method) for \<2mm soil fraction,
* `cec_sum`: Sum of Cations (CEC\-8\.2\) in cmol(\+)/kg for \<2mm soil fraction,
* `cec_nh4`: NH4OAc, pH 7 (CEC\-7\) in cmol(\+)/kg for \<2mm soil fraction,
* `ksat_field`: Field\-estimated Saturated Hydraulic Conductivity in cm/day for \<2mm soil fraction,
* `ksat_lab`: Laboratory\-estimated Saturated Hydraulic Conductivity in cm/day for \<2mm soil fraction,
Some variable names have been adjusted (e.g. `ph_h2o` to `ph_h2o_v`) to indicate `unknown method`, so that variables from different laboratory methods can be seamlessly merged.
Conversion between VWC and MWC is based on the formula (Landon, 1991; Benham et al., 1998; van Reeuwijk, 1993), as illustrated in the short example below:
* VWC (%v/v) \= MWC (% by weight) \* bulk density (g/cm3)
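For example, with bulk density expressed in g/cm3 (as used in the import code in this chapter, e.g. for the NCSS data):
```
mwc <- 20        # moisture content, % by weight
bd  <- 1.35      # bulk density, g/cm3
vwc <- mwc * bd  # volumetric water content, % v/v
vwc
#> [1] 27
```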
6\.3 Data import
----------------
#### 6\.3\.0\.1 NCSS Characterization Database
* National Cooperative Soil Survey, (2020\). [National Cooperative Soil Survey Characterization Database](http://ncsslabdatamart.sc.egov.usda.gov/). <http://ncsslabdatamart.sc.egov.usda.gov/>
```
if({
ncss.site <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Site_Location.csv", stringsAsFactors = FALSE)
#str(ncss.site)
## Location accuracy unknown but we assume 100m
ncss.site$location_accuracy_max = NA
ncss.site$location_accuracy_min = 100
ncss.layer <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Layer.csv", stringsAsFactors = FALSE)
ncss.bdm <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Bulk_Density_and_Moisture.csv", stringsAsFactors = FALSE)
#summary(as.factor(ncss.bdm$prep_code))
ncss.bdm.0 <- ncss.bdm[ncss.bdm$prep_code=="S",]
#summary(ncss.bdm.0$db_od)
## 0 values --- error!
ncss.carb <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Carbon_and_Extractions.csv", stringsAsFactors = FALSE)
ncss.organic <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_Organic.csv", stringsAsFactors = FALSE)
ncss.pH <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_pH_and_Carbonates.csv", stringsAsFactors = FALSE)
#str(ncss.pH)
#summary(!is.na(ncss.pH$ph_h2o))
ncss.PSDA <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_PSDA_and_Rock_Fragments.csv", stringsAsFactors = FALSE)
ncss.CEC <- read.csv("/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/NCSS_CEC_and_Bases.csv")
ncss.horizons <- plyr::join_all(list(ncss.bdm.0, ncss.layer, ncss.carb, ncss.organic, ncss.pH, ncss.PSDA, ncss.CEC), type = "left", by="labsampnum")
#head(ncss.horizons)
nrow(ncss.horizons)
ncss.horizons$ksat_lab = NA; ncss.horizons$ksat_field = NA
hydrosprops.NCSS = plyr::join(ncss.site[,site.names], ncss.horizons[,hor.names], by="site_key")
## soil organic carbon:
#summary(!is.na(hydrosprops.NCSS$oc))
#summary(!is.na(hydrosprops.NCSS$ph_h2o))
#summary(!is.na(hydrosprops.NCSS$ph_kcl))
hydrosprops.NCSS$source_db = "USDA_NCSS"
#str(hydrosprops.NCSS)
#hist(hydrosprops.NCSS$w3cld[hydrosprops.NCSS$w3cld<150], breaks=45, col="gray")
## ERROR: MANY VALUES >100%
## fills in missing BD values using formula from Köchy, Hiederer, and Freibauer (2015)
db.f = ifelse(is.na(hydrosprops.NCSS$db_13b), -0.31*log(hydrosprops.NCSS$oc)+1.38, hydrosprops.NCSS$db_13b)
db.f[db.f<0.02 | db.f>2.87] = NA
## Convert to volumetric % to match most of world data sets:
hydrosprops.NCSS$w3cld = hydrosprops.NCSS$w3cld * db.f
hydrosprops.NCSS$w15l2 = hydrosprops.NCSS$w15l2 * db.f
hydrosprops.NCSS$w10cld = hydrosprops.NCSS$w10cld * db.f
#summary(as.factor(hydrosprops.NCSS$tex_psda))
## texture classes need to be cleaned up!
## check WRC values for sandy soils
#hydrosprops.NCSS[which(!is.na(hydrosprops.NCSS$w3cld) & hydrosprops.NCSS$sand_tot_psa>95)[1:10],]
## check WRC values for ORGANIC soils
#hydrosprops.NCSS[which(!is.na(hydrosprops.NCSS$w3cld) & hydrosprops.NCSS$oc>12)[1:10],]
## w3cld > 100?
hydrosprops.NCSS$confidence_degree = 1
hydrosprops.NCSS$project_url = "http://ncsslabdatamart.sc.egov.usda.gov/"
hydrosprops.NCSS$citation_url = "https://doi.org/10.2136/sssaj2016.11.0386n"
hydrosprops.NCSS = complete.vars(hydrosprops.NCSS)
saveRDS.gz(hydrosprops.NCSS, "/mnt/diskstation/data/Soil_points/INT/USDA_NCSS/hydrosprops.NCSS.rds")
}
dim(hydrosprops.NCSS)
#> [1] 113991 40
```
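As a worked example of the bulk-density pedotransfer used above to fill gaps (Köchy, Hiederer, and Freibauer, 2015), db \= \-0\.31 · ln(OC) \+ 1\.38, a horizon with 2% organic carbon gets:
```
oc <- 2                        # soil organic carbon, % wt
db <- -0.31 * log(oc) + 1.38   # estimated bulk density, g/cm3
round(db, 2)
#> [1] 1.17
```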
#### 6\.3\.0\.2 Africa soil profiles database
* Leenaars, J. G., Van OOstrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\.
```
if({
[require](https://rdrr.io/r/base/library.html)([foreign](https://svn.r-project.org/R-packages/trunk/foreign))
afspdb.profiles <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
## approximate location error
afspdb.profiles$location_accuracy_min = afspdb.profiles$XYAccur * 1e5
afspdb.profiles$location_accuracy_min = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(afspdb.profiles$location_accuracy_min < 20, NA, afspdb.profiles$location_accuracy_min)
afspdb.profiles$location_accuracy_max = NA
afspdb.layers <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
## select columns of interest:
afspdb.s.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "T_Year", "X_LonDD", "Y_LatDD", "location_accuracy_min", "location_accuracy_max")
## Convert to weight content
#summary(afspdb.layers$BlkDens)
## select layers
afspdb.h.lst <- [c](https://rdrr.io/r/base/c.html)("LayerID", "ProfileID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "db_13b", "BlkDens", "COLEws", "VMCpF18", "VMCpF20", "VMCpF25", "VMCpF42", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "PHKCl", "PHH2O", "CecSoil", "cec_nh4", "CfPc", "ksat_lab", "ksat_field")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid")){ afspdb.profiles[,j] = NA }
for(j in [c](https://rdrr.io/r/base/c.html)("db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ksat_lab", "ksat_field")){ afspdb.layers[,j] = NA }
hydrosprops.AfSPDB = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afspdb.profiles[,afspdb.s.lst], afspdb.layers[,afspdb.h.lst])
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(hydrosprops.AfSPDB)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(hydrosprops.AfSPDB[,j])) { hydrosprops.AfSPDB[,j] <- [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(hydrosprops.AfSPDB[,j] < -200, NA, hydrosprops.AfSPDB[,j]) }
}
hydrosprops.AfSPDB$source_db = "AfSPDB"
hydrosprops.AfSPDB$confidence_degree = 5
hydrosprops.AfSPDB$OrgC = hydrosprops.AfSPDB$OrgC/10
#summary(hydrosprops.AfSPDB$OrgC)
hydrosprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
hydrosprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
hydrosprops.AfSPDB = complete.vars(hydrosprops.AfSPDB, sel = [c](https://rdrr.io/r/base/c.html)("VMCpF25", "VMCpF42"), coords = [c](https://rdrr.io/r/base/c.html)("X_LonDD", "Y_LatDD"))
saveRDS.gz(hydrosprops.AfSPDB, "/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/hydrosprops.AfSPDB.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.AfSPDB)
#> [1] 10720 40
```
#### 6\.3\.0\.3 ISRIC ISIS
* Batjes, N. H. (1995\). [A homogenized soil data file for global environmental research: A subset of FAO, ISRIC and NRCS profiles (Version 1\.0\) (No. 95/10b)](https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf). ISRIC.
* Van de Ven, T., \& Tempel, P. (1994\). ISIS 4\.0: ISRIC Soil Information System: User Manual. ISRIC.
```
if({
isis.xy <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/Sites.csv", stringsAsFactors = FALSE)
#str(isis.xy)
isis.des <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/SitedescriptionResults.csv", stringsAsFactors = FALSE)
isis.site <- data.frame(site_key=isis.xy$Id, usiteid=paste(isis.xy$CountryISO, isis.xy$SiteNumber, sep=""))
id0.lst = c(236,235,224)
nm0.lst = c("longitude_decimal_degrees", "latitude_decimal_degrees", "site_obsdate")
isis.site.l = plyr::join_all(lapply(1:length(id0.lst), function(i){plyr::rename(subset(isis.des, ValueId==id0.lst[i])[,c("SampleId","Value")], replace=c("SampleId"="site_key", "Value"=paste(nm0.lst[i])))}), type = "full")
isis.site.df = join(isis.site, isis.site.l)
for(j in nm0.lst){ isis.site.df[,j] <- as.numeric(isis.site.df[,j]) }
isis.site.df[isis.site.df$usiteid=="CI2","latitude_decimal_degrees"] = 5.883333
#str(isis.site.df)
isis.site.df$location_accuracy_min = 100
isis.site.df$location_accuracy_max = NA
isis.smp <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalSamples.csv", stringsAsFactors = FALSE)
isis.ana <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalResults.csv", stringsAsFactors = FALSE)
#str(isis.ana)
isis.class <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/ClassificationResults.csv", stringsAsFactors = FALSE)
isis.hor <- data.frame(labsampnum=isis.smp$Id, hzn_top=isis.smp$Top, hzn_bot=isis.smp$Bottom, site_key=isis.smp$SiteId)
isis.hor$hzn_bot <- as.numeric(gsub(">", "", isis.hor$hzn_bot))
#str(isis.hor)
id.lst = c(1,2,22,4,28,31,32,14,34,38,39,42)
nm.lst = c("ph_h2o","ph_kcl","wpg2","oc","sand_tot_psa","silt_tot_psa","clay_tot_psa","cec_sum","db_od","w10cld","w3cld", "w15l2")
#str(as.numeric(isis.ana$Value[isis.ana$ValueId==38]))
isis.hor.l = plyr::join_all(lapply(1:length(id.lst), function(i){plyr::rename(subset(isis.ana, ValueId==id.lst[i])[,c("SampleId","Value")], replace=c("SampleId"="labsampnum", "Value"=paste(nm.lst[i])))}), type = "full")
#summary(as.numeric(isis.hor.l$w3cld))
isis.hor.df = join(isis.hor, isis.hor.l)
isis.hor.df = isis.hor.df[,]
#summary(as.numeric(isis.hor.df$w3cld))
for(j in nm.lst){ isis.hor.df[,j] <- as.numeric(isis.hor.df[,j]) }
#str(isis.hor.df)
## add missing columns
for(j in c("layer_sequence", "hzn_desgn", "tex_psda", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "db_13b", "w6clod", "ksat_lab", "ksat_field")){ isis.hor.df[,j] = NA }
which(!hor.names %in% names(isis.hor.df))
hydrosprops.ISIS <- join(isis.site.df[,site.names], isis.hor.df[,hor.names], type="left")
hydrosprops.ISIS$source_db = "ISRIC_ISIS"
hydrosprops.ISIS$confidence_degree = 1
hydrosprops.ISIS$project_url = "https://isis.isric.org"
hydrosprops.ISIS$citation_url = "https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf"
hydrosprops.ISIS = complete.vars(hydrosprops.ISIS)
saveRDS.gz(hydrosprops.ISIS, "/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/hydrosprops.ISIS.rds")
}
dim(hydrosprops.ISIS)
#> [1] 1176 40
```
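The `join_all(lapply(...))` calls above reshape the long ISIS tables (one row per `SampleId` and `ValueId` code) into one wide column per soil property. A minimal sketch of that long-to-wide step on a hypothetical two-property table (codes and names are made up):

```
## hypothetical long-format analytical table
ana = data.frame(SampleId = c("s1","s1","s2","s2"),
                 ValueId  = c(1, 39, 1, 39),
                 Value    = c("6.2", "21.5", "5.8", "18.0"))
id.lst = c(1, 39)
nm.lst = c("ph_h2o", "w3cld")
wide = plyr::join_all(lapply(1:length(id.lst), function(i){
  plyr::rename(subset(ana, ValueId==id.lst[i])[,c("SampleId","Value")],
               replace=c("SampleId"="labsampnum", "Value"=nm.lst[i]))
}), type = "full")
wide
#>   labsampnum ph_h2o w3cld
#> 1         s1    6.2  21.5
#> 2         s2    5.8  18.0
```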
#### 6\.3\.0\.4 ISRIC WISE
* Batjes, N.H. (2009\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management 25:124–127\.
```
if({
wise.SITE <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", stringsAsFactors=FALSE)
#summary(as.factor(wise.SITE$LONLAT_ACC))
wise.SITE$location_accuracy_min = ifelse(wise.SITE$LONLAT_ACC=="D", 1e5/2, ifelse(wise.SITE$LONLAT_ACC=="S", 30, ifelse(wise.SITE$LONLAT_ACC=="M", 1800/2, NA)))
wise.SITE$location_accuracy_max = NA
wise.HORIZON <- read.csv("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv")
wise.s.lst <- c("WISE3_id", "SOURCE_ID", "DATEYR", "LONDD", "LATDD", "location_accuracy_min", "location_accuracy_max")
## Volumetric values
#summary(wise.HORIZON$BULKDENS)
#summary(wise.HORIZON$VMC1)
wise.HORIZON$WISE3_id = wise.HORIZON$WISE3_ID
wise.h.lst <- c("labsampnum", "WISE3_id", "HONU", "TOPDEP", "BOTDEP", "DESIG", "db_13b", "BULKDENS", "COLEws", "w6clod", "VMC1", "VMC2", "VMC3", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "PHKCL", "PHH2O", "CECSOIL", "cec_nh4", "GRAVEL", "ksat_lab", "ksat_field")
## add missing columns
for(j in c("labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ wise.HORIZON[,j] = NA }
hydrosprops.WISE = plyr::join(wise.SITE[,wise.s.lst], wise.HORIZON[,wise.h.lst])
for(j in 1:ncol(hydrosprops.WISE)){
if(is.numeric(hydrosprops.WISE[,j])) { hydrosprops.WISE[,j] <- ifelse(hydrosprops.WISE[,j] < -200, NA, hydrosprops.WISE[,j]) }
}
hydrosprops.WISE$ORGC = hydrosprops.WISE$ORGC/10
hydrosprops.WISE$source_db = "ISRIC_WISE"
hydrosprops.WISE$project_url = "https://isric.org"
hydrosprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
hydrosprops.WISE <- complete.vars(hydrosprops.WISE, sel=c("VMC2", "VMC3"), coords = c("LONDD", "LATDD"))
hydrosprops.WISE$confidence_degree = 5
#summary(hydrosprops.WISE$VMC3)
saveRDS.gz(hydrosprops.WISE, "/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/hydrosprops.WISE.rds")
}
dim(hydrosprops.WISE)
#> [1] 1325 40
```
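The nested `ifelse()` above maps the WISE `LONLAT_ACC` class codes (presumably degree/second/minute precision of the reported coordinates) to an approximate location error in metres. A named-vector lookup expresses the same mapping more compactly; a small sketch reusing the metre values from the block above:

```
## toy example: map accuracy class codes to metres via a named lookup vector
acc.class = c("D", "S", "M", NA)
acc.tbl = c("D" = 1e5/2, "S" = 30, "M" = 1800/2)
unname(acc.tbl[acc.class])
#> [1] 50000    30   900    NA
```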
#### 6\.3\.0\.5 Fine Root Ecology Database (FRED)
* Iversen CM, Powell AS, McCormack ML, Blackwood CB, Freschet GT, Kattge J, Roumet C, Stover DB, Soudzilovskaia NA, Valverde\-Barrantes OJ, van Bodegom PM, Violle C. 2018\. Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 2\. Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.012/1417481](https://doi.org/10.25581/ornlsfa.012/1417481).
```
if({
fred = read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", skip = 5, header=FALSE)
names(fred) = names(read.csv("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", nrows=1, header=TRUE))
fred.h.lst = c("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "db_13b", "Soil.bulk.density", "COLEws", "w6clod", "w10cld", "Soil.water_Volumetric.content", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "ph_kcl", "Soil.pH_Water", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
#summary(fred$Soil.water_Volumetric.content)
#summary(fred$Soil.water_Storage.capacity)
fred$site_obsdate = rowMeans(fred[,c("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE)
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = ifelse(is.na(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = ifelse(is.na(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_bot = ifelse(is.na(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Lower.sampling.depth)
fred$hzn_top = ifelse(is.na(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Upper.sampling.depth)
x.na = fred.h.lst[which(!fred.h.lst %in% names(fred))]
if(length(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
hydrosprops.FRED = fred[,fred.h.lst]
#plot(hydrosprops.FRED[,4:5])
hydrosprops.FRED$source_db = "FRED"
hydrosprops.FRED$confidence_degree = 5
hydrosprops.FRED$project_url = "https://roots.ornl.gov/"
hydrosprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.012/1417481"
hydrosprops.FRED = complete.vars(hydrosprops.FRED, sel = c("Soil.water_Volumetric.content", "Soil.texture_Fraction.clay"))
saveRDS.gz(hydrosprops.FRED, "/mnt/diskstation/data/Soil_points/INT/FRED/hydrosprops.FRED.rds")
}
dim(hydrosprops.FRED)
#> [1] 3761 40
```
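Several fields above fall back to a secondary column when the primary one is missing (measured vs. estimated coordinates, reported vs. derived sampling depths). The `ifelse(is.na(x), y, x)` idiom used for this is a coalesce; a minimal sketch with made-up values:

```
## toy example: prefer the measured value, fall back to the estimate
lon          = c(12.51,    NA, -3.20)
lon_estimate = c(12.50, 101.3, -3.25)
ifelse(is.na(lon), lon_estimate, lon)
#> [1]  12.51 101.30  -3.20
## dplyr provides the same operation as coalesce():
# dplyr::coalesce(lon, lon_estimate)
```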
#### 6\.3\.0\.6 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/).
```
if({
russ.HOR = read.csv("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = paste(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * russ.HOR$TEXTSIF
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * russ.HOR$TEXTSIF
## Correct texture fractions:
sumTex <- rowSums(russ.HOR[,c("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- russ.HOR$ORGMAT/1.724
## add missing columns
for(j in c("site_obsdate", "location_accuracy_min", "location_accuracy_max", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ russ.HOR[,j] = NA }
russ.sel.h = c("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "location_accuracy_min", "location_accuracy_max", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "db_13b", "DVOL", "COLEws", "w6clod", "WR10", "WR33", "WR1500", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "PHSLT", "PHH2O", "CECST", "cec_nh4", "wpg2","ksat_lab", "ksat_field")
hydrosprops.EGRPR = russ.HOR[,russ.sel.h]
hydrosprops.EGRPR$source_db = "Russia_EGRPR"
hydrosprops.EGRPR$confidence_degree = 2
hydrosprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
hydrosprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
hydrosprops.EGRPR <- complete.vars(hydrosprops.EGRPR, sel=c("WR33", "WR1500"), coords = c("LONG", "LAT"))
#summary(hydrosprops.EGRPR$WR1500)
saveRDS.gz(hydrosprops.EGRPR, "/mnt/diskstation/data/Soil_points/Russia/EGRPR/hydrosprops.EGRPR.rds")
}
dim(hydrosprops.EGRPR)
#> [1] 1138 40
```
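Two conversions in this block deserve a note: organic matter (`ORGMAT`) is divided by the conventional Van Bemmelen factor 1.724 to approximate organic carbon, and sand and silt are rescaled so that the three texture fractions again sum to 100% while the clay fraction is held fixed. A worked example of the rescaling with made-up numbers:

```
## made-up horizon: fractions sum to 110% after the fine-silt split
clay = 20; sand = 50; silt = 40
sumTex = clay + sand + silt                 # 110
scale  = (sumTex - clay) / (100 - clay)     # 90 / 80 = 1.125
sand / scale
#> [1] 44.44444
silt / scale
#> [1] 35.55556
clay + sand / scale + silt / scale          # back to 100
#> [1] 100
```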
#### 6\.3\.0\.7 SPADE\-2
* Hannam J.A., Hollis, J.M., Jones, R.J.A., Bellamy, P.H., Hayes, S.E., Holden, A., Van Liedekerke, M.H. and Montanarella, L. (2009\). [SPADE\-2: The soil profile analytical database for Europe, Version 2\.0 Beta Version March 2009](https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2). Unpublished Report, 27pp.
* Kristensen, J. A., Balstrøm, T., Jones, R. J. A., Jones, A., Montanarella, L., Panagos, P., and Breuning\-Madsen, H.: Development of a harmonised soil profile analytical database for Europe: a resource for supporting regional soil management, SOIL, 5, 289–301, [https://doi.org/10\.5194/soil\-5\-289\-2019](https://doi.org/10.5194/soil-5-289-2019), 2019\.
```
if({
spade.PLOT <- read.csv("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_PLOT.csv")
#str(spade.PLOT)
spade.HOR <- read.csv("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_HOR.csv")
spade.PLOT = spade.PLOT[!spade.PLOT$LON_COOR_V>180 & spade.PLOT$LAT_COOR_V>20,]
#plot(spade.PLOT[,c("LON_COOR_V","LAT_COOR_V")])
spade.PLOT$location_accuracy_min = 100
spade.PLOT$location_accuracy_max = NA
#site.names = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees")
spade.PLOT$ProfileID = paste(spade.PLOT$CNTY_C, spade.PLOT$PLOT_ID, sep="_")
spade.PLOT$T_Year = 2009
spade.s.lst <- c("PLOT_ID", "ProfileID", "T_Year", "LON_COOR_V", "LAT_COOR_V", "location_accuracy_min", "location_accuracy_max")
## standardize:
spade.HOR$SLTPPT <- spade.HOR$SILT1_V + spade.HOR$SILT2_V
spade.HOR$SNDPPT <- spade.HOR$SAND1_V + spade.HOR$SAND2_V + spade.HOR$SAND3_V
spade.HOR$PHIKCL <- NA
spade.HOR$PHIKCL[which(spade.HOR$PH_M %in% "A14")] <- spade.HOR$PH_V[which(spade.HOR$PH_M %in% "A14")]
spade.HOR$PHIHO5 <- NA
spade.HOR$PHIHO5[which(spade.HOR$PH_M %in% "A12")] <- spade.HOR$PH_V[which(spade.HOR$PH_M %in% "A12")]
#summary(spade.HOR$BD_V)
for(j in c("site_obsdate", "layer_sequence", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "w15bfm", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ spade.HOR[,j] = NA }
spade.h.lst = c("HOR_ID","PLOT_ID","layer_sequence","HOR_BEG_V","HOR_END_V","HOR_NAME","db_13b", "BD_V", "COLEws", "w6clod", "w10cld", "WCFC_V", "WC4_V", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY_V", "SLTPPT", "SNDPPT", "OC_V", "PHIKCL", "PHIHO5", "CEC_V", "cec_nh4", "GRAV_C", "ksat_lab", "ksat_field")
hydrosprops.SPADE2 = plyr::join(spade.PLOT[,spade.s.lst], spade.HOR[,spade.h.lst])
hydrosprops.SPADE2$source_db = "SPADE2"
hydrosprops.SPADE2$confidence_degree = 15
hydrosprops.SPADE2$project_url = "https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2"
hydrosprops.SPADE2$citation_url = "https://doi.org/10.1016/j.landusepol.2011.07.003"
hydrosprops.SPADE2 <- complete.vars(hydrosprops.SPADE2, sel=c("WCFC_V", "WC4_V"), coords = c("LON_COOR_V","LAT_COOR_V"))
#summary(hydrosprops.SPADE2$WC4_V)
#summary(is.na(hydrosprops.SPADE2$WC4_V))
#hist(hydrosprops.SPADE2$WC4_V, breaks=45, col="gray")
saveRDS.gz(hydrosprops.SPADE2, "/mnt/diskstation/data/Soil_points/EU/SPADE/hydrosprops.SPADE2.rds")
}
dim(hydrosprops.SPADE2)
#> [1] 1182 40
```
#### 6\.3\.0\.8 Canada National Pedon Database
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0).
```
if({
NPDB.nm = c("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::join_all(lapply(paste0("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
#str(NPDB.HOR)
#summary(NPDB.HOR$BULK_DEN)
## 0 values -> ERROR!
## add missing columns
NPDB.HOR$HISMMN = paste0(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
for(j in c("usiteid", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ph_kcl", "ksat_lab", "ksat_field")){ NPDB.HOR[,j] = NA }
npdb.sel.h = c("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "CONF_METRS", "location_accuracy_max", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "db_13b", "BULK_DEN", "COLEws", "w6clod", "w10cld", "RETN_33KP", "RETN_1500K", "RETN_HYGR", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "CARB_ORG", "ph_kcl", "PH_H2O", "CEC", "cec_nh4", "VC_SAND", "ksat_lab", "ksat_field")
hydrosprops.NPDB = NPDB.HOR[,npdb.sel.h]
hydrosprops.NPDB$source_db = "Canada_NPDB"
hydrosprops.NPDB$confidence_degree = 1
hydrosprops.NPDB$project_url = "https://open.canada.ca/data/en/"
hydrosprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
hydrosprops.NPDB <- complete.vars(hydrosprops.NPDB, sel=c("RETN_33KP", "RETN_1500K"), coords = c("DD_LONG", "DD_LAT"))
saveRDS.gz(hydrosprops.NPDB, "/mnt/diskstation/data/Soil_points/Canada/NPDB/hydrosprops.NPDB.rds")
}
dim(hydrosprops.NPDB)
#> [1] 404 40
```
#### 6\.3\.0\.9 ETH imported data from literature
* Digitized soil hydraulic measurements from the literature by the [ETH Soil and Terrestrial Environmental Physics](https://step.ethz.ch/).
```
if({
xlsxFile = list.files(pattern="Global_soil_water_tables.xlsx", full.names = TRUE, recursive = TRUE)
wb = openxlsx::getSheetNames(xlsxFile)
eth.tbl = plyr::rbind.fill(
openxlsx::read.xlsx(xlsxFile, sheet = "ETH_imported_literature"),
openxlsx::read.xlsx(xlsxFile, sheet = "ETH_imported_literature_more"),
openxlsx::read.xlsx(xlsxFile, sheet = "ETH_extra_data set"),
openxlsx::read.xlsx(xlsxFile, sheet = "Tibetan_plateau"),
openxlsx::read.xlsx(xlsxFile, sheet = "Belgium_Vereecken_data"),
openxlsx::read.xlsx(xlsxFile, sheet = "Australia_dataset"),
openxlsx::read.xlsx(xlsxFile, sheet = "Florida_Soils_Ksat"),
openxlsx::read.xlsx(xlsxFile, sheet = "China_dataset"),
openxlsx::read.xlsx(xlsxFile, sheet = "Sand_dunes_Siberia_database"),
openxlsx::read.xlsx(xlsxFile, sheet = "New_data_4_03")
)
#dim(eth.tbl)
#summary(as.factor(eth.tbl$reference_source))
## Data quality tables
lab.ql = openxlsx::read.xlsx(xlsxFile, sheet = "Quality_per_site_key")
lab.cd = plyr::join(eth.tbl["site_key"], lab.ql)$confidence_degree
eth.tbl$confidence_degree = ifelse(is.na(eth.tbl$confidence_degree), lab.cd, eth.tbl$confidence_degree)
#summary(as.factor(eth.tbl$confidence_degree))
## missing columns
for(j in c("usiteid", "labsampnum", "layer_sequence", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_sum", "cec_nh4", "wpg2", "project_url", "citation_url")){ eth.tbl[,j] = NA }
hydrosprops.ETH = eth.tbl[,col.names]
col.names[which(!col.names %in% names(eth.tbl))]
hydrosprops.ETH$project_url = "https://step.ethz.ch/"
hydrosprops.ETH$citation_url = "https://doi.org/10.5194/essd-2020-149"
hydrosprops.ETH = complete.vars(hydrosprops.ETH)
#hist(hydrosprops.ETH$w15l2, breaks=45, col="gray")
#hist(log1p(hydrosprops.ETH$ksat_lab), breaks=45, col="gray")
saveRDS.gz(hydrosprops.ETH, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.ETH.rds")
}
dim(hydrosprops.ETH)
#> [1] 9023 40
```
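The block reads ten sheets of the same workbook by name; the sheet list returned by `openxlsx::getSheetNames()` (stored in `wb` above) can drive the same import programmatically. A sketch, assuming `xlsxFile` from the block above and a hypothetical `keep` subset of data sheets that all share the harmonized column layout:

```
## read a set of sheets from one workbook and stack them (sketch)
keep = c("ETH_imported_literature", "ETH_imported_literature_more", "Tibetan_plateau")
eth.lst = lapply(keep, function(s){ openxlsx::read.xlsx(xlsxFile, sheet = s) })
eth.tbl = plyr::rbind.fill(eth.lst)
```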
#### 6\.3\.0\.10 HYBRAS
* Ottoni, M. V., Ottoni Filho, T. B., Schaap, M. G., Lopes\-Assad, M. L. R., \& Rotunno Filho, O. C. (2018\). [Hydrophysical database for Brazilian soils (HYBRAS) and pedotransfer functions for water retention](http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html). Vadose Zone Journal, 17(1\).
```
if({
hybras.HOR = openxlsx::read.xlsx(xlsxFile, sheet = "HYBRAS.V1_integrated_tables_RAW")
#str(hybras.HOR)
## some points had only UTM coordinates and had to be manually corrected
## subset to unique values:
hybras.HOR = hybras.HOR[,]
#summary(hybras.HOR$bulk_den)
#hist(hybras.HOR$ksat, breaks=35, col="grey")
## add missing columns
for(j in c("usiteid", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_sum", "cec_nh4", "ph_kcl", "ph_h2o", "ksat_field", "uuid")){ hybras.HOR[,j] = NA }
hybras.HOR$w3cld = rowMeans(hybras.HOR[,c("theta20","theta50")], na.rm = TRUE)
hybras.sel.h = c("site_key", "usiteid", "year", "LongitudeOR", "LatitudeOR", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "top_depth", "bot_depth", "horizon", "db_13b", "bulk_den", "COLEws", "w6clod", "theta10", "w3cld", "theta15000", "satwat", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay", "silt", "sand", "org_carb", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "vc_sand", "ksat", "ksat_field")
hydrosprops.HYBRAS = hybras.HOR[,hybras.sel.h]
hydrosprops.HYBRAS$source_db = "HYBRAS"
hydrosprops.HYBRAS$confidence_degree = 1
for(i in c("theta10", "w3cld", "theta15000", "satwat")){ hydrosprops.HYBRAS[,i] = hydrosprops.HYBRAS[,i]*100 }
#summary(hydrosprops.HYBRAS$theta10)
#summary(hydrosprops.HYBRAS$satwat)
#hist(hydrosprops.HYBRAS$theta10, breaks=45, col="gray")
#hist(log1p(hydrosprops.HYBRAS$ksat), breaks=45, col="gray")
#summary(!is.na(hydrosprops.HYBRAS$ksat))
hydrosprops.HYBRAS$project_url = "http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html"
hydrosprops.HYBRAS$citation_url = "https://doi.org/10.2136/vzj2017.05.0095"
hydrosprops.HYBRAS <- complete.vars(hydrosprops.HYBRAS, sel=c("w3cld", "theta15000", "ksat", "ksat_field"), coords = c("LongitudeOR", "LatitudeOR"))
saveRDS.gz(hydrosprops.HYBRAS, "/mnt/diskstation/data/Soil_points/INT/HYBRAS/hydrosprops.HYBRAS.rds")
}
dim(hydrosprops.HYBRAS)
#> [1] 814 40
```
#### 6\.3\.0\.11 UNSODA
* Nemes, Attila; Schaap, Marcel; Leij, Feike J.; Wösten, J. Henk M. (2015\). [UNSODA 2\.0: Unsaturated Soil Hydraulic Database](https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties). Database and program for indirect methods of estimating unsaturated hydraulic properties. US Salinity Laboratory \- ARS \- USDA. [https://doi.org/10\.15482/USDA.ADC/1173246](https://doi.org/10.15482/USDA.ADC/1173246). Accessed 2020\-06\-08\.
```
if({
unsoda.LOC = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/general_c.csv")
#unsoda.LOC = unsoda.LOC[!unsoda.LOC$Lat==0,]
#plot(unsoda.LOC[,c("Long","Lat")])
unsoda.SOIL = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/soil_properties.csv")
#summary(unsoda.SOIL$k_sat)
## Soil water retention in lab:
tmp.hyd = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/lab_drying_h-t.csv")
#str(tmp.hyd)
tmp.hyd = tmp.hyd[,]
tmp.hyd$theta = tmp.hyd$theta*100
#head(tmp.hyd)
pr.lst = c(6,10,33,15000)
cl.lst = c("w6clod", "w10cld", "w3cld", "w15l2")
tmp.hyd.tbl = data.frame(code=unique(tmp.hyd$code), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:length(pr.lst)){
tmp.hyd.tbl[,cl.lst[i]] = plyr::join(tmp.hyd.tbl, tmp.hyd[which(tmp.hyd$preshead==pr.lst[i]),c("code","theta")], match="first")$theta
}
#head(tmp.hyd.tbl)
## ksat
kst.lev = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_lab_sat_cond.csv", na.strings=c("","NA","No comment"))
kst.met = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/methodology.csv", na.strings=c("","NA","No comment"))
kst.met$comment_lsc = paste(plyr::join(kst.met[c("comment_lsc_ID")], kst.lev)$comment_lsc)
kst.met$comment_lsc[which(kst.met$comment_lsc=="NA")] = NA
kst.fld = read.csv("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_field_sat_cond.csv", na.strings=c("","NA","No comment"))
kst.met$comment_fsc = paste(plyr::join(kst.met[c("comment_fsc_ID")], kst.fld)$comment_fsc)
kst.met$comment_fsc[which(kst.met$comment_fsc=="NA")] = NA
summary(as.factor(kst.met$comment_lsc))
kst.met$comment_met = ifelse(is.na(kst.met$comment_lsc)&
unsoda.SOIL$comment_met = paste(plyr::join(unsoda.SOIL[c("code")], kst.met)$comment_met)
#summary(as.factor(unsoda.SOIL$comment_met))
sel.fld = unsoda.SOIL$comment_met %in% c("field Double ring infiltrometer","field Ponding", "field Steady infiltration")
unsoda.SOIL$ksat_lab[which(!sel.fld)] = unsoda.SOIL$k_sat[which(!sel.fld)]
unsoda.SOIL$ksat_field[is.na(unsoda.SOIL$ksat_lab)] = unsoda.SOIL$k_sat[is.na(unsoda.SOIL$ksat_lab)]
unsoda.col = join_all(list(unsoda.LOC, unsoda.SOIL, tmp.hyd.tbl))
#head(unsoda.col)
#summary(unsoda.col$OM_content)
unsoda.col$oc = signif(unsoda.col$OM_content/1.724, 4)
for(j in c("usiteid", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ph_kcl", "wpg2")){ unsoda.col[,j] = NA }
unsoda.sel.h = c("code", "usiteid", "date", "Long", "Lat", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "depth_upper", "depth_lower", "horizon", "db_13b", "bulk_density", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture", "Clay", "Silt", "Sand", "oc", "ph_kcl", "pH", "CEC", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydrosprops.UNSODA = unsoda.col[,unsoda.sel.h]
hydrosprops.UNSODA$source_db = "UNSODA"
## corrected coordinates:
unsoda.ql = openxlsx::read.xlsx(xlsxFile, sheet = "UNSODA_degree")
hydrosprops.UNSODA$confidence_degree = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$confidence_degree
hydrosprops.UNSODA$Texture = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$tex_psda
hydrosprops.UNSODA$location_accuracy_min = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_min
hydrosprops.UNSODA$location_accuracy_max = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_max
## replace coordinates
unsoda.Long = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_long
unsoda.Lat = plyr::join(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_lat
hydrosprops.UNSODA$Long = ifelse(is.na(unsoda.Long), hydrosprops.UNSODA$Long, unsoda.Long)
hydrosprops.UNSODA$Lat = ifelse(is.na(unsoda.Long), hydrosprops.UNSODA$Lat, unsoda.Lat)
#hist(hydrosprops.UNSODA$w15l2, breaks=45, col="gray")
#hist(hydrosprops.UNSODA$ksat_lab, breaks=45, col="gray")
unsoda.rem = hydrosprops.UNSODA$code %in% unsoda.ql$code[is.na(unsoda.ql$additional_information)]
#summary(unsoda.rem)
hydrosprops.UNSODA = hydrosprops.UNSODA[unsoda.rem,]
## texture fractions sometimes need to be multiplied by 100!
#hydrosprops.UNSODA[hydrosprops.UNSODA$code==2220,]
sum.tex.1 = rowSums(hydrosprops.UNSODA[,c("Clay", "Silt", "Sand")], na.rm = TRUE)
sum.tex.r = which(sum.tex.1<1.2 & sum.tex.1>0)
for(j in c("Clay", "Silt", "Sand")){
hydrosprops.UNSODA[sum.tex.r,j] = hydrosprops.UNSODA[sum.tex.r,j] * 100
}
hydrosprops.UNSODA$project_url = "https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties"
hydrosprops.UNSODA$citation_url = "https://doi.org/10.15482/USDA.ADC/1173246"
hydrosprops.UNSODA <- complete.vars(hydrosprops.UNSODA, coords = c("Long", "Lat"))
saveRDS.gz(hydrosprops.UNSODA, "/mnt/diskstation/data/Soil_points/INT/UNSODA/hydrosprops.UNSODA.rds")
}
dim(hydrosprops.UNSODA)
#> [1] 298 40
```
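The loop over `pr.lst`/`cl.lst` picks, for every sample `code`, the first water content measured at the pressure heads listed in `pr.lst` and stores it in the matching harmonized column (`w6clod`, `w10cld`, `w3cld`, `w15l2`). A minimal sketch of that long-to-wide step on a hypothetical retention table:

```
## toy retention table: one row per sample x pressure head
ret = data.frame(code = c(1,1,1,1,2,2),
                 preshead = c(6, 10, 33, 15000, 33, 15000),
                 theta = c(43, 38, 31, 12, 27, 9))
pr.lst = c(6, 10, 33, 15000)
cl.lst = c("w6clod", "w10cld", "w3cld", "w15l2")
out = data.frame(code = unique(ret$code), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:length(pr.lst)){
  out[,cl.lst[i]] = plyr::join(out, ret[which(ret$preshead==pr.lst[i]), c("code","theta")], match="first")$theta
}
out
#>   code w6clod w10cld w3cld w15l2
#> 1    1     43     38    31    12
#> 2    2     NA     NA    27     9
```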
#### 6\.3\.0\.12 HYDROS
* Schindler, Uwe; Müller, Lothar (2015\): [Soil hydraulic functions of international soils measured with the Extended Evaporation Method (EEM) and the HYPROP device](http://dx.doi.org/10.4228/ZALF.2003.273), Leibniz\-Zentrum für Agrarlandschaftsforschung (ZALF) e.V., doi: 10\.4228/ZALF.2003\.273\.
```
if({
hydros.tbl = read.csv("/mnt/diskstation/data/Soil_points/INT/HydroS/int_rawret.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
hydros.tbl = hydros.tbl[,]
#summary(hydros.tbl$TENSION)
hydros.tbl$TENSIONc = cut(hydros.tbl$TENSION, breaks=c(1,5,8,15,30,40,1000,15001))
#summary(hydros.tbl$TENSIONc)
hydros.tbl$WATER_CONTENT = hydros.tbl$WATER_CONTENT
#summary(hydros.tbl$WATER_CONTENT)
#head(hydros.tbl)
pr2.lst = c("(5,8]", "(8,15]","(30,40]","(1e+03,1.5e+04]")
cl.lst = c("w6clod", "w10cld", "w3cld", "w15l2")
hydros.tbl.df = data.frame(SITE_ID=unique(hydros.tbl$SITE_ID), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:length(pr2.lst)){
hydros.tbl.df[,cl.lst[i]] = plyr::join(hydros.tbl.df, hydros.tbl[which(hydros.tbl$TENSIONc==pr2.lst[i]),c("SITE_ID","WATER_CONTENT")], match="first")$WATER_CONTENT
}
#head(hydros.tbl.df)
## properties:
hydros.soil = read.csv("/mnt/diskstation/data/Soil_points/INT/HydroS/int_basicdata.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
#head(hydros.soil)
#plot(hydros.soil[,c("H","R")])
hydros.col = plyr::join(hydros.soil, hydros.tbl.df)
#summary(hydros.col$OMC)
hydros.col$oc = hydros.col$OMC/1.724
hydros.col$location_accuracy_min = 100
hydros.col$location_accuracy_max = NA
for(j in c("layer_sequence", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ hydros.col[,j] = NA }
hydros.sel.h = c("SITE_ID", "SITE", "SAMP_DATE", "H", "R", "location_accuracy_min", "location_accuracy_max", "SAMP_NO", "layer_sequence", "TOP_DEPTH", "BOT_DEPTH", "HORIZON", "db_13b", "BULK_DENSITY", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydros.sel.h[which(!hydros.sel.h %in% names(hydros.col))]
hydrosprops.HYDROS = hydros.col[,hydros.sel.h]
hydrosprops.HYDROS$source_db = "HydroS"
hydrosprops.HYDROS$confidence_degree = 1
hydrosprops.HYDROS$project_url = "http://dx.doi.org/10.4228/ZALF.2003.273"
hydrosprops.HYDROS$citation_url = "https://doi.org/10.18174/odjar.v3i1.15763"
hydrosprops.HYDROS <- complete.vars(hydrosprops.HYDROS, coords = c("H","R"))
saveRDS.gz(hydrosprops.HYDROS, "/mnt/diskstation/data/Soil_points/INT/HYDROS/hydrosprops.HYDROS.rds")
}
dim(hydrosprops.HYDROS)
#> [1] 153 40
```
#### 6\.3\.0\.13 SWIG
* Rahmati, M., Weihermüller, L., Vanderborght, J., Pachepsky, Y. A., Mao, L., Sadeghi, S. H., … \& Toth, B. (2018\). [Development and analysis of the Soil Water Infiltration Global database](https://doi.org/10.5194/essd-10-1237-2018). Earth Syst. Sci. Data, 10, 1237–1263\.
```
if({
meta.tbl = read.csv("/mnt/diskstation/data/Soil_points/INT/SWIG/Metadata.csv", skip = 1, fill = TRUE, blank.lines.skip=TRUE, flush=TRUE, stringsAsFactors=FALSE)
swig.xy = read.table("/mnt/diskstation/data/Soil_points/INT/SWIG/Locations.csv", sep=";", dec = ",", stringsAsFactors=FALSE, header=TRUE, na.strings = c("-",""," "), fill = TRUE)
swig.xy$x = as.numeric(gsub(",", ".", swig.xy$x))
swig.xy$y = as.numeric(gsub(",", ".", swig.xy$y))
swig.xy = swig.xy[,1:8]
names(swig.xy)[3] = "EndDataset"
library(tidyr)
swig.xyf = tidyr::fill(swig.xy, c("Dataset","EndDataset"))
swig.xyf$N = swig.xyf$EndDataset - swig.xyf$Dataset + 1
swig.xyf$N = ifelse(swig.xyf$N<1,1,swig.xyf$N)
swig.xyf = swig.xyf[,]
#plot(swig.xyf[,c("x","y")])
swig.xyf.df = swig.xyf[rep(seq_len(nrow(swig.xyf)), swig.xyf$N),]
rn = sapply(row.names(swig.xyf.df), function(i){as.numeric(strsplit(i, "\\.")[[1]][2])})
swig.xyf.df$Code = rowSums(data.frame(rn, swig.xyf.df$Dataset), na.rm = TRUE)
## bind together
swig.col = plyr::join(swig.xyf.df[,c("Code","x","y")], meta.tbl)
## additional values for ksat
swig2.tbl = read.csv("/mnt/diskstation/data/Soil_points/INT/SWIG/Statistics.csv", fill = TRUE, blank.lines.skip=TRUE, sep=";", dec = ",", flush=TRUE, stringsAsFactors=FALSE)
#hist(log1p(as.numeric(swig2.tbl$Ks..cm.hr.)), breaks=45, col="gray")
swig.col$Ks..cm.hr. = as.numeric(plyr::join(swig.col["Code"], swig2.tbl[c("Code","Ks..cm.hr.")])$Ks..cm.hr.)
swig.col$Ks..cm.hr. = ifelse(swig.col$Ks..cm.hr. * 24 <= 0.01, NA, swig.col$Ks..cm.hr.)
swig.col$Ksat = ifelse(is.na(swig.col$Ksat), swig.col$Ks..cm.hr., swig.col$Ksat)
for(j in c("usiteid", "site_obsdate", "labsampnum", "layer_sequence", "hzn_desgn", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_nh4", "ksat_lab")){ swig.col[,j] = NA }
## depths are missing?
swig.col$hzn_top = 0
swig.col$hzn_bot = 20
swig.col$location_accuracy_min = NA
swig.col$location_accuracy_max = NA
swig.col$w15l2 = swig.col$PWP * 100
swig.col$w3cld = swig.col$FC * 100
swig.sel.h = c("Code", "usiteid", "site_obsdate", "x", "y", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "db_13b", "Db", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture.Class", "Clay", "Silt", "Sand", "OC", "ph_kcl", "pH", "CEC", "cec_nh4", "Gravel", "ksat_lab", "Ksat")
swig.sel.h[which(!swig.sel.h %in% names(swig.col))]
hydrosprops.SWIG = swig.col[,swig.sel.h]
hydrosprops.SWIG$source_db = "SWIG"
hydrosprops.SWIG$Ksat = hydrosprops.SWIG$Ksat * 24 ## convert from cm/hr to cm/day
#hist(hydrosprops.SWIG$w3cld, breaks=45, col="gray")
#hist(log1p(hydrosprops.SWIG$Ksat), breaks=25, col="gray")
#summary(hydrosprops.SWIG$Ksat); summary(hydrosprops.UNSODA$ksat_lab)
## confidence degree
SWIG.ql = openxlsx::read.xlsx(xlsxFile, sheet = "SWIG_database_Confidence_degree")
hydrosprops.SWIG$confidence_degree = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$confidence_degree
hydrosprops.SWIG$location_accuracy_min = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_min
hydrosprops.SWIG$location_accuracy_max = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_max
#summary(as.factor(hydrosprops.SWIG$confidence_degree))
## replace coordinates
SWIG.Long = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_long
SWIG.Lat = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_lat
hydrosprops.SWIG$x = ifelse(is.na(SWIG.Long), hydrosprops.SWIG$x, SWIG.Long)
hydrosprops.SWIG$y = ifelse(is.na(SWIG.Long), hydrosprops.SWIG$y, SWIG.Lat)
hydrosprops.SWIG$Texture.Class = plyr::join(hydrosprops.SWIG["Code"], SWIG.ql)$tex_psda
swig.lab = SWIG.ql$Code[which(SWIG.ql$Ksat_Method %in% c("Constant head method", "Constant Head Method", "Falling head method"))]
hydrosprops.SWIG$ksat_lab[hydrosprops.SWIG$Code %in% swig.lab] = hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code %in% swig.lab]
hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code %in% swig.lab] = NA
## remove duplicates
swig.rem = hydrosprops.SWIG$Code %in% SWIG.ql$Code[is.na(SWIG.ql$additional_information)]
#summary(swig.rem)
#Mode FALSE TRUE
#logical 200 6921
hydrosprops.SWIG = hydrosprops.SWIG[swig.rem,]
hydrosprops.SWIG = hydrosprops.SWIG[,]
## remove all ksat values < 0.01 ?
#summary(hydrosprops.SWIG$Ksat < 0.01)
hydrosprops.SWIG$project_url = "https://soil-modeling.org/resources-links/data-portal/swig"
hydrosprops.SWIG$citation_url = "https://doi.org/10.5194/essd-10-1237-2018"
hydrosprops.SWIG <- complete.vars(hydrosprops.SWIG, sel=c("w15l2","w3cld","ksat_lab","Ksat"), coords=c("x","y"))
saveRDS.gz(hydrosprops.SWIG, "/mnt/diskstation/data/Soil_points/INT/SWIG/hydrosprops.SWIG.rds")
}
dim(hydrosprops.SWIG)
#> [1] 3676 40
```
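The `Locations.csv` sheet stores one row per contributed dataset with a start and end code, so the block above expands each range into one row per SWIG `Code` by repeating rows `N` times and reconstructing the code from the duplicated row names. A small sketch of that expansion with made-up ranges:

```
## toy example: expand dataset code ranges into one row per code
rng = data.frame(Dataset = c(1, 4), EndDataset = c(3, 5), x = c(10.1, 23.4), y = c(45.2, 51.0))
rng$N = rng$EndDataset - rng$Dataset + 1
out = rng[rep(seq_len(nrow(rng)), rng$N),]
## row names become "1", "1.1", "1.2", "2", "2.1"; the suffix is the offset within the range
rn = sapply(row.names(out), function(i){ as.numeric(strsplit(i, "\\.")[[1]][2]) })
out$Code = unname(rowSums(data.frame(rn, out$Dataset), na.rm = TRUE))
out$Code
#> [1] 1 2 3 4 5
```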
#### 6\.3\.0\.14 Pseudo\-points
* Pseudo\-observations using simulated points (world deserts)
```
if({
## 0 soil organic carbon + 98% sand content (deserts)
sprops.SIM = readRDS("/mnt/diskstation/data/LandGIS/training_points/soil_props/sprops.SIM.rds")
sprops.SIM$w10cld = 3.1
sprops.SIM$w3cld = 1.2
sprops.SIM$w15l2 = 0.8
sprops.SIM$tex_psda = "sand"
sprops.SIM$usiteid = sprops.SIM$lcv_admin0_fao.gaul_c_250m_s0..0cm_2015_v1.0
sprops.SIM$longitude_decimal_degrees = sprops.SIM$x
sprops.SIM$latitude_decimal_degrees = sprops.SIM$y
## Very approximate values for Ksat for shifting sand:
tax.r = raster::extract(raster("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif"), sprops.SIM[,c("longitude_decimal_degrees","latitude_decimal_degrees")])
tax.leg = read.csv("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif.csv")
tax.ksat_lab = aggregate(eth.tbl$ksat_lab, by=list(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.ksat_lab.sd = aggregate(eth.tbl$ksat_lab, by=list(Group=eth.tbl$tax_grtgroup), FUN=sd, na.rm=TRUE)
tax.ksat_field = aggregate(eth.tbl$ksat_field, by=list(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.leg$ksat_lab = join(tax.leg, tax.ksat_lab)$x
tax.leg$ksat_field = join(tax.leg, tax.ksat_field)$x
tax.sel = c("cryochrepts","cryorthods","torripsamments","haplustolls","torrifluvents")
sprops.SIM$ksat_field = join(data.frame(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group %in% tax.sel,])$ksat_field
sprops.SIM$ksat_lab = join(data.frame(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group %in% tax.sel,])$ksat_lab
#summary(sprops.SIM$ksat_lab)
#summary(sprops.SIM$ksat_field)
#View(sprops.SIM)
for(j in col.names[which(!col.names %in% names(sprops.SIM))]){ sprops.SIM[,j] <- NA }
sprops.SIM$project_url = "https://gitlab.com/openlandmap/global-layers"
sprops.SIM$citation_url = ""
hydrosprops.SIM = sprops.SIM[,col.names]
hydrosprops.SIM$confidence_degree = 30
saveRDS.gz(hydrosprops.SIM, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.SIM.rds")
}
dim(hydrosprops.SIM)
#> [1] 8133 40
```
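The pseudo-points inherit very rough `ksat` values by averaging the ETH measurements per USDA great group and then joining those class means back onto the simulated points through the extracted raster class. A minimal sketch of that aggregate-then-join lookup on hypothetical data:

```
## toy example: class means joined back to points by class label
obs = data.frame(tax_grtgroup = c("torripsamments","torripsamments","haplustolls"),
                 ksat_lab = c(950, 1050, 310))
cls.mean = aggregate(obs$ksat_lab, by=list(Group=obs$tax_grtgroup), FUN=mean, na.rm=TRUE)
pts = data.frame(site_key = 1:3, Group = c("torripsamments","haplustolls","torripsamments"))
pts$ksat_lab = plyr::join(pts, cls.mean)$x
pts
#>   site_key          Group ksat_lab
#> 1        1 torripsamments     1000
#> 2        2    haplustolls      310
#> 3        3 torripsamments     1000
```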
#### 6\.3\.0\.2 Africa soil profiles database
* Leenaars, J. G., Van OOstrum, A. J. M., \& Ruiperez Gonzalez, M. (2014\). [Africa soil profiles database version 1\.2\. A compilation of georeferenced and standardized legacy soil profile data for Sub\-Saharan Africa (with dataset)](https://www.isric.org/projects/africa-soil-profiles-database-afsp). Wageningen: ISRIC Report 2014/01; 2014\.
```
if({
[require](https://rdrr.io/r/base/library.html)([foreign](https://svn.r-project.org/R-packages/trunk/foreign))
afspdb.profiles <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Profiles.dbf", as.is=TRUE)
## approximate location error
afspdb.profiles$location_accuracy_min = afspdb.profiles$XYAccur * 1e5
afspdb.profiles$location_accuracy_min = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(afspdb.profiles$location_accuracy_min < 20, NA, afspdb.profiles$location_accuracy_min)
afspdb.profiles$location_accuracy_max = NA
afspdb.layers <- [read.dbf](https://rdrr.io/pkg/foreign/man/read.dbf.html)("/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/AfSP012Qry_Layers.dbf", as.is=TRUE)
## select columns of interest:
afspdb.s.lst <- [c](https://rdrr.io/r/base/c.html)("ProfileID", "usiteid", "T_Year", "X_LonDD", "Y_LatDD", "location_accuracy_min", "location_accuracy_max")
## Convert to weight content
#summary(afspdb.layers$BlkDens)
## select layers
afspdb.h.lst <- [c](https://rdrr.io/r/base/c.html)("LayerID", "ProfileID", "LayerNr", "UpDpth", "LowDpth", "HorDes", "db_13b", "BlkDens", "COLEws", "VMCpF18", "VMCpF20", "VMCpF25", "VMCpF42", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "LabTxtr", "Clay", "Silt", "Sand", "OrgC", "PHKCl", "PHH2O", "CecSoil", "cec_nh4", "CfPc", "ksat_lab", "ksat_field")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid")){ afspdb.profiles[,j] = NA }
for(j in [c](https://rdrr.io/r/base/c.html)("db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ksat_lab", "ksat_field")){ afspdb.layers[,j] = NA }
hydrosprops.AfSPDB = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(afspdb.profiles[,afspdb.s.lst], afspdb.layers[,afspdb.h.lst])
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(hydrosprops.AfSPDB)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(hydrosprops.AfSPDB[,j])) { hydrosprops.AfSPDB[,j] <- [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(hydrosprops.AfSPDB[,j] < -200, NA, hydrosprops.AfSPDB[,j]) }
}
hydrosprops.AfSPDB$source_db = "AfSPDB"
hydrosprops.AfSPDB$confidence_degree = 5
hydrosprops.AfSPDB$OrgC = hydrosprops.AfSPDB$OrgC/10
#summary(hydrosprops.AfSPDB$OrgC)
hydrosprops.AfSPDB$project_url = "https://www.isric.org/projects/africa-soil-profiles-database-afsp"
hydrosprops.AfSPDB$citation_url = "https://www.isric.org/sites/default/files/isric_report_2014_01.pdf"
hydrosprops.AfSPDB = complete.vars(hydrosprops.AfSPDB, sel = [c](https://rdrr.io/r/base/c.html)("VMCpF25", "VMCpF42"), coords = [c](https://rdrr.io/r/base/c.html)("X_LonDD", "Y_LatDD"))
saveRDS.gz(hydrosprops.AfSPDB, "/mnt/diskstation/data/Soil_points/AF/AfSIS_SPDB/hydrosprops.AfSPDB.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.AfSPDB)
#> [1] 10720 40
```
#### 6\.3\.0\.3 ISRIC ISIS
* Batjes, N. H. (1995\). [A homogenized soil data file for global environmental research: A subset of FAO, ISRIC and NRCS profiles (Version 1\.0\) (No. 95/10b)](https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf). ISRIC.
* Van de Ven, T., \& Tempel, P. (1994\). ISIS 4\.0: ISRIC Soil Information System: User Manual. ISRIC.
```
if({
isis.xy <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/Sites.csv", stringsAsFactors = FALSE)
#str(isis.xy)
isis.des <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/SitedescriptionResults.csv", stringsAsFactors = FALSE)
isis.site <- [data.frame](https://rdrr.io/r/base/data.frame.html)(site_key=isis.xy$Id, usiteid=[paste](https://rdrr.io/r/base/paste.html)(isis.xy$CountryISO, isis.xy$SiteNumber, sep=""))
id0.lst = [c](https://rdrr.io/r/base/c.html)(236,235,224)
nm0.lst = [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "site_obsdate")
isis.site.l = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(id0.lst), function(i){plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)([subset](https://Rdatatable.gitlab.io/data.table/reference/subset.data.table.html)(isis.des, ValueId==id0.lst[i])[,[c](https://rdrr.io/r/base/c.html)("SampleId","Value")], replace=[c](https://rdrr.io/r/base/c.html)("SampleId"="site_key", "Value"=[paste](https://rdrr.io/r/base/paste.html)(nm0.lst[i])))}), type = "full")
isis.site.df = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.site, isis.site.l)
for(j in nm0.lst){ isis.site.df[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(isis.site.df[,j]) }
isis.site.df[isis.site.df$usiteid=="CI2","latitude_decimal_degrees"] = 5.883333
#str(isis.site.df)
isis.site.df$location_accuracy_min = 100
isis.site.df$location_accuracy_max = NA
isis.smp <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalSamples.csv", stringsAsFactors = FALSE)
isis.ana <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/AnalyticalResults.csv", stringsAsFactors = FALSE)
#str(isis.ana)
isis.class <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/ClassificationResults.csv", stringsAsFactors = FALSE)
isis.hor <- [data.frame](https://rdrr.io/r/base/data.frame.html)(labsampnum=isis.smp$Id, hzn_top=isis.smp$Top, hzn_bot=isis.smp$Bottom, site_key=isis.smp$SiteId)
isis.hor$hzn_bot <- [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(">", "", isis.hor$hzn_bot))
#str(isis.hor)
id.lst = [c](https://rdrr.io/r/base/c.html)(1,2,22,4,28,31,32,14,34,38,39,42)
nm.lst = [c](https://rdrr.io/r/base/c.html)("ph_h2o","ph_kcl","wpg2","oc","sand_tot_psa","silt_tot_psa","clay_tot_psa","cec_sum","db_od","w10cld","w3cld", "w15l2")
#str(as.numeric(isis.ana$Value[isis.ana$ValueId==38]))
isis.hor.l = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)(1:[length](https://rdrr.io/r/base/length.html)(id.lst), function(i){plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)([subset](https://Rdatatable.gitlab.io/data.table/reference/subset.data.table.html)(isis.ana, ValueId==id.lst[i])[,[c](https://rdrr.io/r/base/c.html)("SampleId","Value")], replace=[c](https://rdrr.io/r/base/c.html)("SampleId"="labsampnum", "Value"=[paste](https://rdrr.io/r/base/paste.html)(nm.lst[i])))}), type = "full")
#summary(as.numeric(isis.hor.l$w3cld))
isis.hor.df = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.hor, isis.hor.l)
isis.hor.df = isis.hor.df[,]
#summary(as.numeric(isis.hor.df$w3cld))
for(j in nm.lst){ isis.hor.df[,j] <- [as.numeric](https://rdrr.io/r/base/numeric.html)(isis.hor.df[,j]) }
#str(isis.hor.df)
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("layer_sequence", "hzn_desgn", "tex_psda", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "db_13b", "w6clod", "ksat_lab", "ksat_field")){ isis.hor.df[,j] = NA }
[which](https://rdrr.io/r/base/which.html)(!hor.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(isis.hor.df))
hydrosprops.ISIS <- [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(isis.site.df[,site.names], isis.hor.df[,hor.names], type="left")
hydrosprops.ISIS$source_db = "ISRIC_ISIS"
hydrosprops.ISIS$confidence_degree = 1
hydrosprops.ISIS$project_url = "https://isis.isric.org"
hydrosprops.ISIS$citation_url = "https://www.isric.org/sites/default/files/isric_report_1995_10b.pdf"
hydrosprops.ISIS = complete.vars(hydrosprops.ISIS)
saveRDS.gz(hydrosprops.ISIS, "/mnt/diskstation/data/Soil_points/INT/ISRIC_ISIS/hydrosprops.ISIS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.ISIS)
#> [1] 1176 40
```
#### 6\.3\.0\.4 ISRIC WISE
* Batjes, N.H. (2009\). [Harmonized soil profile data for applications at global and continental scales: updates to the WISE database](http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x). Soil Use and Management 25:124–127\.
```
if(!exists("hydrosprops.WISE")){ ## guard condition assumed
wise.SITE <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_SITE.csv", stringsAsFactors=FALSE)
#summary(as.factor(wise.SITE$LONLAT_ACC))
wise.SITE$location_accuracy_min = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="D", 1e5/2, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="S", 30, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(wise.SITE$LONLAT_ACC=="M", 1800/2, NA)))
wise.SITE$location_accuracy_max = NA
wise.HORIZON <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/WISE3_HORIZON.csv")
wise.s.lst <- [c](https://rdrr.io/r/base/c.html)("WISE3_id", "SOURCE_ID", "DATEYR", "LONDD", "LATDD", "location_accuracy_min", "location_accuracy_max")
## Volumetric values
#summary(wise.HORIZON$BULKDENS)
#summary(wise.HORIZON$VMC1)
wise.HORIZON$WISE3_id = wise.HORIZON$WISE3_ID
wise.h.lst <- [c](https://rdrr.io/r/base/c.html)("labsampnum", "WISE3_id", "HONU", "TOPDEP", "BOTDEP", "DESIG", "db_13b", "BULKDENS", "COLEws", "w6clod", "VMC1", "VMC2", "VMC3", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY", "SILT", "SAND", "ORGC", "PHKCL", "PHH2O", "CECSOIL", "cec_nh4", "GRAVEL", "ksat_lab", "ksat_field")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ wise.HORIZON[,j] = NA }
hydrosprops.WISE = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(wise.SITE[,wise.s.lst], wise.HORIZON[,wise.h.lst])
for(j in 1:[ncol](https://rdrr.io/r/base/nrow.html)(hydrosprops.WISE)){
if([is.numeric](https://rdrr.io/r/base/numeric.html)(hydrosprops.WISE[,j])) { hydrosprops.WISE[,j] <- [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(hydrosprops.WISE[,j] < -200, NA, hydrosprops.WISE[,j]) }
}
hydrosprops.WISE$ORGC = hydrosprops.WISE$ORGC/10
hydrosprops.WISE$source_db = "ISRIC_WISE"
hydrosprops.WISE$project_url = "https://isric.org"
hydrosprops.WISE$citation_url = "http://dx.doi.org/10.1111/j.1475-2743.2009.00202.x"
hydrosprops.WISE <- complete.vars(hydrosprops.WISE, sel=[c](https://rdrr.io/r/base/c.html)("VMC2", "VMC3"), coords = [c](https://rdrr.io/r/base/c.html)("LONDD", "LATDD"))
hydrosprops.WISE$confidence_degree = 5
#summary(hydrosprops.WISE$VMC3)
saveRDS.gz(hydrosprops.WISE, "/mnt/diskstation/data/Soil_points/INT/ISRIC_WISE/hydrosprops.WISE.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.WISE)
#> [1] 1325 40
```
#### 6\.3\.0\.5 Fine Root Ecology Database (FRED)
* Iversen CM, Powell AS, McCormack ML, Blackwood CB, Freschet GT, Kattge J, Roumet C, Stover DB, Soudzilovskaia NA, Valverde\-Barrantes OJ, van Bodegom PM, Violle C. 2018\. Fine\-Root Ecology Database (FRED): A Global Collection of Root Trait Data with Coincident Site, Vegetation, Edaphic, and Climatic Data, Version 2\. Oak Ridge National Laboratory, TES SFA, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A. Access on\-line at: [https://doi.org/10\.25581/ornlsfa.012/1417481](https://doi.org/10.25581/ornlsfa.012/1417481).
```
if(!exists("hydrosprops.FRED")){ ## guard condition assumed
fred = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", skip = 5, header=FALSE)
[names](https://rdrr.io/r/base/names.html)(fred) = [names](https://rdrr.io/r/base/names.html)([read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/FRED/FRED2_20180518.csv", nrows=1, header=TRUE))
fred.h.lst = [c](https://rdrr.io/r/base/c.html)("Notes_Row.ID", "Data.source_DOI", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "Soil.horizon", "db_13b", "Soil.bulk.density", "COLEws", "w6clod", "w10cld", "Soil.water_Volumetric.content", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Soil.texture", "Soil.texture_Fraction.clay", "Soil.texture_Fraction.silt", "Soil.texture_Fraction.sand", "Soil.organic.C.content", "ph_kcl", "Soil.pH_Water", "Soil.cation.exchange.capacity..CEC.", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
#summary(fred$Soil.water_Volumetric.content)
#summary(fred$Soil.water_Storage.capacity)
fred$site_obsdate = [rowMeans](https://rdrr.io/r/base/colSums.html)(fred[,[c](https://rdrr.io/r/base/c.html)("Sample.collection_Year.ending.collection", "Sample.collection_Year.beginning.collection")], na.rm=TRUE)
#summary(fred$site_obsdate)
fred$longitude_decimal_degrees = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Longitude), fred$Longitude_Estimated, fred$Longitude)
fred$latitude_decimal_degrees = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Latitude), fred$Latitude_Estimated, fred$Latitude)
#summary(as.factor(fred$Soil.horizon))
fred$hzn_bot = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Lower.sampling.depth), fred$Soil.depth - 5, fred$Soil.depth_Lower.sampling.depth)
fred$hzn_top = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(fred$Soil.depth_Upper.sampling.depth), fred$Soil.depth + 5, fred$Soil.depth_Upper.sampling.depth)
x.na = fred.h.lst[[which](https://rdrr.io/r/base/which.html)(!fred.h.lst [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(fred))]
if([length](https://rdrr.io/r/base/length.html)(x.na)>0){ for(i in x.na){ fred[,i] = NA } }
hydrosprops.FRED = fred[,fred.h.lst]
#plot(hydrosprops.FRED[,4:5])
hydrosprops.FRED$source_db = "FRED"
hydrosprops.FRED$confidence_degree = 5
hydrosprops.FRED$project_url = "https://roots.ornl.gov/"
hydrosprops.FRED$citation_url = "https://doi.org/10.25581/ornlsfa.012/1417481"
hydrosprops.FRED = complete.vars(hydrosprops.FRED, sel = [c](https://rdrr.io/r/base/c.html)("Soil.water_Volumetric.content", "Soil.texture_Fraction.clay"))
saveRDS.gz(hydrosprops.FRED, "/mnt/diskstation/data/Soil_points/INT/FRED/hydrosprops.FRED.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.FRED)
#> [1] 3761 40
```
#### 6\.3\.0\.6 EGRPR
* [Russian Federation: The Unified State Register of Soil Resources (EGRPR)](http://egrpr.esoil.ru/).
```
if(!exists("hydrosprops.EGRPR")){ ## guard condition assumed
russ.HOR = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/Russia/EGRPR/Russia_EGRPR_soil_pedons.csv")
russ.HOR$SOURCEID = [paste](https://rdrr.io/r/base/paste.html)(russ.HOR$CardID, russ.HOR$SOIL_ID, sep="_")
russ.HOR$SNDPPT <- russ.HOR$TEXTSAF + russ.HOR$TEXSCM
russ.HOR$SLTPPT <- russ.HOR$TEXTSIC + russ.HOR$TEXTSIM + 0.8 * russ.HOR$TEXTSIF
russ.HOR$CLYPPT <- russ.HOR$TEXTCL + 0.2 * russ.HOR$TEXTSIF
## Correct texture fractions:
sumTex <- [rowSums](https://rdrr.io/r/base/colSums.html)(russ.HOR[,[c](https://rdrr.io/r/base/c.html)("SLTPPT","CLYPPT","SNDPPT")])
russ.HOR$SNDPPT <- russ.HOR$SNDPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$SLTPPT <- russ.HOR$SLTPPT / ((sumTex - russ.HOR$CLYPPT) /(100 - russ.HOR$CLYPPT))
russ.HOR$oc <- russ.HOR$ORGMAT/1.724
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("site_obsdate", "location_accuracy_min", "location_accuracy_max", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ russ.HOR[,j] = NA }
russ.sel.h = [c](https://rdrr.io/r/base/c.html)("SOURCEID", "SOIL_ID", "site_obsdate", "LONG", "LAT", "location_accuracy_min", "location_accuracy_max", "labsampnum", "HORNMB", "HORTOP", "HORBOT", "HISMMN", "db_13b", "DVOL", "COLEws", "w6clod", "WR10", "WR33", "WR1500", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLYPPT", "SLTPPT", "SNDPPT", "oc", "PHSLT", "PHH2O", "CECST", "cec_nh4", "wpg2","ksat_lab", "ksat_field")
hydrosprops.EGRPR = russ.HOR[,russ.sel.h]
hydrosprops.EGRPR$source_db = "Russia_EGRPR"
hydrosprops.EGRPR$confidence_degree = 2
hydrosprops.EGRPR$project_url = "http://egrpr.esoil.ru/"
hydrosprops.EGRPR$citation_url = "https://doi.org/10.19047/0136-1694-2016-86-115-123"
hydrosprops.EGRPR <- complete.vars(hydrosprops.EGRPR, sel=[c](https://rdrr.io/r/base/c.html)("WR33", "WR1500"), coords = [c](https://rdrr.io/r/base/c.html)("LONG", "LAT"))
#summary(hydrosprops.EGRPR$WR1500)
saveRDS.gz(hydrosprops.EGRPR, "/mnt/diskstation/data/Soil_points/Russia/EGRPR/hydrosprops.EGRPR.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.EGRPR)
#> [1] 1138 40
```
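The division by 1\.724 in the chunk above converts organic matter to organic carbon using the conventional Van Bemmelen factor; a minimal illustration (the helper name `om2oc` is ours, not part of the workflow):

```
## Van Bemmelen factor: organic carbon ~= organic matter / 1.724
om2oc <- function(om){ om / 1.724 }
om2oc(3.45)
#> [1] 2.00116
```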
#### 6\.3\.0\.7 SPADE\-2
* Hannam J.A., Hollis, J.M., Jones, R.J.A., Bellamy, P.H., Hayes, S.E., Holden, A., Van Liedekerke, M.H. and Montanarella, L. (2009\). [SPADE\-2: The soil profile analytical database for Europe, Version 2\.0 Beta Version March 2009](https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2). Unpublished Report, 27pp.
* Kristensen, J. A., Balstrøm, T., Jones, R. J. A., Jones, A., Montanarella, L., Panagos, P., and Breuning\-Madsen, H.: Development of a harmonised soil profile analytical database for Europe: a resource for supporting regional soil management, SOIL, 5, 289–301, [https://doi.org/10\.5194/soil\-5\-289\-2019](https://doi.org/10.5194/soil-5-289-2019), 2019\.
```
if(!exists("hydrosprops.SPADE2")){ ## guard condition assumed
spade.PLOT <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_PLOT.csv")
#str(spade.PLOT)
spade.HOR <- [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/EU/SPADE/DAT_HOR.csv")
spade.PLOT = spade.PLOT[!spade.PLOT$LON_COOR_V>180 & spade.PLOT$LAT_COOR_V>20,]
#plot(spade.PLOT[,c("LON_COOR_V","LAT_COOR_V")])
spade.PLOT$location_accuracy_min = 100
spade.PLOT$location_accuracy_max = NA
#site.names = c("site_key", "usiteid", "site_obsdate", "longitude_decimal_degrees", "latitude_decimal_degrees")
spade.PLOT$ProfileID = [paste](https://rdrr.io/r/base/paste.html)(spade.PLOT$CNTY_C, spade.PLOT$PLOT_ID, sep="_")
spade.PLOT$T_Year = 2009
spade.s.lst <- [c](https://rdrr.io/r/base/c.html)("PLOT_ID", "ProfileID", "T_Year", "LON_COOR_V", "LAT_COOR_V", "location_accuracy_min", "location_accuracy_max")
## standardize:
spade.HOR$SLTPPT <- spade.HOR$SILT1_V + spade.HOR$SILT2_V
spade.HOR$SNDPPT <- spade.HOR$SAND1_V + spade.HOR$SAND2_V + spade.HOR$SAND3_V
spade.HOR$PHIKCL <- NA
spade.HOR$PHIKCL[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A14")] <- spade.HOR$PH_V[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A14")]
spade.HOR$PHIHO5 <- NA
spade.HOR$PHIHO5[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A12")] <- spade.HOR$PH_V[[which](https://rdrr.io/r/base/which.html)(spade.HOR$PH_M [%in%](https://rdrr.io/r/base/match.html) "A12")]
#summary(spade.HOR$BD_V)
for(j in [c](https://rdrr.io/r/base/c.html)("site_obsdate", "layer_sequence", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "w15bfm", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ksat_lab", "ksat_field")){ spade.HOR[,j] = NA }
spade.h.lst = [c](https://rdrr.io/r/base/c.html)("HOR_ID","PLOT_ID","layer_sequence","HOR_BEG_V","HOR_END_V","HOR_NAME","db_13b", "BD_V", "COLEws", "w6clod", "w10cld", "WCFC_V", "WC4_V", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "CLAY_V", "SLTPPT", "SNDPPT", "OC_V", "PHIKCL", "PHIHO5", "CEC_V", "cec_nh4", "GRAV_C", "ksat_lab", "ksat_field")
hydrosprops.SPADE2 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(spade.PLOT[,spade.s.lst], spade.HOR[,spade.h.lst])
hydrosprops.SPADE2$source_db = "SPADE2"
hydrosprops.SPADE2$confidence_degree = 15
hydrosprops.SPADE2$project_url = "https://esdac.jrc.ec.europa.eu/content/soil-profile-analytical-database-2"
hydrosprops.SPADE2$citation_url = "https://doi.org/10.1016/j.landusepol.2011.07.003"
hydrosprops.SPADE2 <- complete.vars(hydrosprops.SPADE2, sel=[c](https://rdrr.io/r/base/c.html)("WCFC_V", "WC4_V"), coords = [c](https://rdrr.io/r/base/c.html)("LON_COOR_V","LAT_COOR_V"))
#summary(hydrosprops.SPADE2$WC4_V)
#summary(is.na(hydrosprops.SPADE2$WC4_V))
#hist(hydrosprops.SPADE2$WC4_V, breaks=45, col="gray")
saveRDS.gz(hydrosprops.SPADE2, "/mnt/diskstation/data/Soil_points/EU/SPADE/hydrosprops.SPADE2.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.SPADE2)
#> [1] 1182 40
```
#### 6\.3\.0\.8 Canada National Pedon Database
* [Agriculture and Agri\-Food Canada National Pedon Database](https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0).
```
if(!exists("hydrosprops.NPDB")){ ## guard condition assumed
NPDB.nm = [c](https://rdrr.io/r/base/c.html)("NPDB_V2_sum_source_info.csv","NPDB_V2_sum_chemical.csv", "NPDB_V2_sum_horizons_raw.csv", "NPDB_V2_sum_physical.csv")
NPDB.HOR = plyr::[join_all](https://rdrr.io/pkg/plyr/man/join_all.html)([lapply](https://rdrr.io/r/base/lapply.html)([paste0](https://rdrr.io/r/base/paste.html)("/mnt/diskstation/data/Soil_points/Canada/NPDB/", NPDB.nm), read.csv), type = "full")
#str(NPDB.HOR)
#summary(NPDB.HOR$BULK_DEN)
## 0 values -> ERROR!
## add missing columns
NPDB.HOR$HISMMN = [paste0](https://rdrr.io/r/base/paste.html)(NPDB.HOR$HZN_MAS, NPDB.HOR$HZN_SUF, NPDB.HOR$HZN_MOD)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "cec_nh4", "ph_kcl", "ksat_lab", "ksat_field")){ NPDB.HOR[,j] = NA }
npdb.sel.h = [c](https://rdrr.io/r/base/c.html)("PEDON_ID", "usiteid", "CAL_YEAR", "DD_LONG", "DD_LAT", "CONF_METRS", "location_accuracy_max", "labsampnum", "layer_sequence", "U_DEPTH", "L_DEPTH", "HISMMN", "db_13b", "BULK_DEN", "COLEws", "w6clod", "w10cld", "RETN_33KP", "RETN_1500K", "RETN_HYGR", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "T_CLAY", "T_SILT", "T_SAND", "CARB_ORG", "ph_kcl", "PH_H2O", "CEC", "cec_nh4", "VC_SAND", "ksat_lab", "ksat_field")
hydrosprops.NPDB = NPDB.HOR[,npdb.sel.h]
hydrosprops.NPDB$source_db = "Canada_NPDB"
hydrosprops.NPDB$confidence_degree = 1
hydrosprops.NPDB$project_url = "https://open.canada.ca/data/en/"
hydrosprops.NPDB$citation_url = "https://open.canada.ca/data/en/dataset/6457fad6-b6f5-47a3-9bd1-ad14aea4b9e0"
hydrosprops.NPDB <- complete.vars(hydrosprops.NPDB, sel=[c](https://rdrr.io/r/base/c.html)("RETN_33KP", "RETN_1500K"), coords = [c](https://rdrr.io/r/base/c.html)("DD_LONG", "DD_LAT"))
saveRDS.gz(hydrosprops.NPDB, "/mnt/diskstation/data/Soil_points/Canada/NPDB/hydrosprops.NPDB.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.NPDB)
#> [1] 404 40
```
#### 6\.3\.0\.9 ETH imported data from literature
* Digitized soil hydraulic measurements from the literature by the [ETH Soil and Terrestrial Environmental Physics](https://step.ethz.ch/).
```
if(!exists("hydrosprops.ETH")){ ## guard condition assumed
xlsxFile = [list.files](https://rdrr.io/r/base/list.files.html)(pattern="Global_soil_water_tables.xlsx", full.names = TRUE, recursive = TRUE)
wb = openxlsx::[getSheetNames](https://rdrr.io/pkg/openxlsx/man/getSheetNames.html)(xlsxFile)
eth.tbl = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)(
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_imported_literature"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_imported_literature_more"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "ETH_extra_data set"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Tibetan_plateau"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Belgium_Vereecken_data"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Australia_dataset"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Florida_Soils_Ksat"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "China_dataset"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Sand_dunes_Siberia_database"),
openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "New_data_4_03")
)
#dim(eth.tbl)
#summary(as.factor(eth.tbl$reference_source))
## Data quality tables
lab.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "Quality_per_site_key")
lab.cd = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(eth.tbl["site_key"], lab.ql)$confidence_degree
eth.tbl$confidence_degree = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(eth.tbl$confidence_degree), lab.cd, eth.tbl$confidence_degree)
#summary(as.factor(eth.tbl$confidence_degree))
## missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "labsampnum", "layer_sequence", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_sum", "cec_nh4", "wpg2", "project_url", "citation_url")){ eth.tbl[,j] = NA }
hydrosprops.ETH = eth.tbl[,col.names]
col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(eth.tbl))]
hydrosprops.ETH$project_url = "https://step.ethz.ch/"
hydrosprops.ETH$citation_url = "https://doi.org/10.5194/essd-2020-149"
hydrosprops.ETH = complete.vars(hydrosprops.ETH)
#hist(hydrosprops.ETH$w15l2, breaks=45, col="gray")
#hist(log1p(hydrosprops.ETH$ksat_lab), breaks=45, col="gray")
saveRDS.gz(hydrosprops.ETH, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.ETH.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.ETH)
#> [1] 9023 40
```
#### 6\.3\.0\.10 HYBRAS
* Ottoni, M. V., Ottoni Filho, T. B., Schaap, M. G., Lopes\-Assad, M. L. R., \& Rotunno Filho, O. C. (2018\). [Hydrophysical database for Brazilian soils (HYBRAS) and pedotransfer functions for water retention](http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html). Vadose Zone Journal, 17(1\).
```
if(!exists("hydrosprops.HYBRAS")){ ## guard condition assumed
hybras.HOR = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "HYBRAS.V1_integrated_tables_RAW")
#str(hybras.HOR)
  ## some points had only UTM coordinates and had to be manually corrected
## subset to unique values:
hybras.HOR = hybras.HOR[,]
#summary(hybras.HOR$bulk_den)
#hist(hybras.HOR$ksat, breaks=35, col="grey")
## add missing columns
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "w6clod", "w10cld", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_sum", "cec_nh4", "ph_kcl", "ph_h2o", "ksat_field", "uuid")){ hybras.HOR[,j] = NA }
hybras.HOR$w3cld = [rowMeans](https://rdrr.io/r/base/colSums.html)(hybras.HOR[,[c](https://rdrr.io/r/base/c.html)("theta20","theta50")], na.rm = TRUE)
hybras.sel.h = [c](https://rdrr.io/r/base/c.html)("site_key", "usiteid", "year", "LongitudeOR", "LatitudeOR", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "top_depth", "bot_depth", "horizon", "db_13b", "bulk_den", "COLEws", "w6clod", "theta10", "w3cld", "theta15000", "satwat", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay", "silt", "sand", "org_carb", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "vc_sand", "ksat", "ksat_field")
hydrosprops.HYBRAS = hybras.HOR[,hybras.sel.h]
hydrosprops.HYBRAS$source_db = "HYBRAS"
hydrosprops.HYBRAS$confidence_degree = 1
for(i in [c](https://rdrr.io/r/base/c.html)("theta10", "w3cld", "theta15000", "satwat")){ hydrosprops.HYBRAS[,i] = hydrosprops.HYBRAS[,i]*100 }
#summary(hydrosprops.HYBRAS$theta10)
#summary(hydrosprops.HYBRAS$satwat)
#hist(hydrosprops.HYBRAS$theta10, breaks=45, col="gray")
#hist(log1p(hydrosprops.HYBRAS$ksat), breaks=45, col="gray")
#summary(!is.na(hydrosprops.HYBRAS$ksat))
hydrosprops.HYBRAS$project_url = "http://www.cprm.gov.br/en/Hydrology/Research-and-Innovation/HYBRAS-4208.html"
hydrosprops.HYBRAS$citation_url = "https://doi.org/10.2136/vzj2017.05.0095"
hydrosprops.HYBRAS <- complete.vars(hydrosprops.HYBRAS, sel=[c](https://rdrr.io/r/base/c.html)("w3cld", "theta15000", "ksat", "ksat_field"), coords = [c](https://rdrr.io/r/base/c.html)("LongitudeOR", "LatitudeOR"))
saveRDS.gz(hydrosprops.HYBRAS, "/mnt/diskstation/data/Soil_points/INT/HYBRAS/hydrosprops.HYBRAS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.HYBRAS)
#> [1] 814 40
```
#### 6\.3\.0\.11 UNSODA
* Nemes, Attila; Schaap, Marcel; Leij, Feike J.; Wösten, J. Henk M. (2015\). [UNSODA 2\.0: Unsaturated Soil Hydraulic Database](https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties). Database and program for indirect methods of estimating unsaturated hydraulic properties. US Salinity Laboratory \- ARS \- USDA. [https://doi.org/10\.15482/USDA.ADC/1173246](https://doi.org/10.15482/USDA.ADC/1173246). Accessed 2020\-06\-08\.
```
if(!exists("hydrosprops.UNSODA")){ ## guard condition assumed
unsoda.LOC = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/general_c.csv")
#unsoda.LOC = unsoda.LOC[!unsoda.LOC$Lat==0,]
#plot(unsoda.LOC[,c("Long","Lat")])
unsoda.SOIL = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/soil_properties.csv")
#summary(unsoda.SOIL$k_sat)
## Soil water retention in lab:
tmp.hyd = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/lab_drying_h-t.csv")
#str(tmp.hyd)
tmp.hyd = tmp.hyd[,]
tmp.hyd$theta = tmp.hyd$theta*100
#head(tmp.hyd)
pr.lst = [c](https://rdrr.io/r/base/c.html)(6,10,33,15000)
cl.lst = [c](https://rdrr.io/r/base/c.html)("w6clod", "w10cld", "w3cld", "w15l2")
tmp.hyd.tbl = [data.frame](https://rdrr.io/r/base/data.frame.html)(code=[unique](https://Rdatatable.gitlab.io/data.table/reference/duplicated.html)(tmp.hyd$code), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(pr.lst)){
tmp.hyd.tbl[,cl.lst[i]] = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tmp.hyd.tbl, tmp.hyd[[which](https://rdrr.io/r/base/which.html)(tmp.hyd$preshead==pr.lst[i]),[c](https://rdrr.io/r/base/c.html)("code","theta")], match="first")$theta
}
#head(tmp.hyd.tbl)
## ksat
kst.lev = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_lab_sat_cond.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/methodology.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met$comment_lsc = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(kst.met[[c](https://rdrr.io/r/base/c.html)("comment_lsc_ID")], kst.lev)$comment_lsc)
kst.met$comment_lsc[[which](https://rdrr.io/r/base/which.html)(kst.met$comment_lsc=="NA")] = NA
kst.fld = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/UNSODA/comment_field_sat_cond.csv", na.strings=[c](https://rdrr.io/r/base/c.html)("","NA","No comment"))
kst.met$comment_fsc = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(kst.met[[c](https://rdrr.io/r/base/c.html)("comment_fsc_ID")], kst.fld)$comment_fsc)
kst.met$comment_fsc[[which](https://rdrr.io/r/base/which.html)(kst.met$comment_fsc=="NA")] = NA
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(kst.met$comment_lsc))
  kst.met$comment_met = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(kst.met$comment_lsc), [paste](https://rdrr.io/r/base/paste.html)("field", kst.met$comment_fsc), [paste](https://rdrr.io/r/base/paste.html)("lab", kst.met$comment_lsc)) ## tail of the original expression lost; reconstructed assuming field vs lab method labels
unsoda.SOIL$comment_met = [paste](https://rdrr.io/r/base/paste.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(unsoda.SOIL[[c](https://rdrr.io/r/base/c.html)("code")], kst.met)$comment_met)
#summary(as.factor(unsoda.SOIL$comment_met))
sel.fld = unsoda.SOIL$comment_met [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("field Double ring infiltrometer","field Ponding", "field Steady infiltration")
unsoda.SOIL$ksat_lab[[which](https://rdrr.io/r/base/which.html)(!sel.fld)] = unsoda.SOIL$k_sat[[which](https://rdrr.io/r/base/which.html)(!sel.fld)]
unsoda.SOIL$ksat_field[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.SOIL$ksat_lab)] = unsoda.SOIL$k_sat[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.SOIL$ksat_lab)]
unsoda.col = join_all([list](https://rdrr.io/r/base/list.html)(unsoda.LOC, unsoda.SOIL, tmp.hyd.tbl))
#head(unsoda.col)
#summary(unsoda.col$OM_content)
unsoda.col$oc = [signif](https://rdrr.io/r/base/Round.html)(unsoda.col$OM_content/1.724, 4)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "labsampnum", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "cec_nh4", "ph_kcl", "wpg2")){ unsoda.col[,j] = NA }
unsoda.sel.h = [c](https://rdrr.io/r/base/c.html)("code", "usiteid", "date", "Long", "Lat", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "depth_upper", "depth_lower", "horizon", "db_13b", "bulk_density", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture", "Clay", "Silt", "Sand", "oc", "ph_kcl", "pH", "CEC", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydrosprops.UNSODA = unsoda.col[,unsoda.sel.h]
hydrosprops.UNSODA$source_db = "UNSODA"
## corrected coordinates:
unsoda.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "UNSODA_degree")
hydrosprops.UNSODA$confidence_degree = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$confidence_degree
hydrosprops.UNSODA$Texture = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$tex_psda
hydrosprops.UNSODA$location_accuracy_min = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_min
hydrosprops.UNSODA$location_accuracy_max = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$location_accuracy_max
## replace coordinates
unsoda.Long = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_long
unsoda.Lat = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.UNSODA["code"], unsoda.ql)$Improved_lat
hydrosprops.UNSODA$Long = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(unsoda.Long), hydrosprops.UNSODA$Long, unsoda.Long)
hydrosprops.UNSODA$Lat = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(unsoda.Long), hydrosprops.UNSODA$Lat, unsoda.Lat)
#hist(hydrosprops.UNSODA$w15l2, breaks=45, col="gray")
#hist(hydrosprops.UNSODA$ksat_lab, breaks=45, col="gray")
unsoda.rem = hydrosprops.UNSODA$code [%in%](https://rdrr.io/r/base/match.html) unsoda.ql$code[[is.na](https://rdrr.io/r/base/NA.html)(unsoda.ql$additional_information)]
#summary(unsoda.rem)
hydrosprops.UNSODA = hydrosprops.UNSODA[unsoda.rem,]
## texture fractions sometimes need to be multiplied by 100!
#hydrosprops.UNSODA[hydrosprops.UNSODA$code==2220,]
sum.tex.1 = [rowSums](https://rdrr.io/r/base/colSums.html)(hydrosprops.UNSODA[,[c](https://rdrr.io/r/base/c.html)("Clay", "Silt", "Sand")], na.rm = TRUE)
sum.tex.r = [which](https://rdrr.io/r/base/which.html)(sum.tex.1<1.2 & sum.tex.1>0)
for(j in [c](https://rdrr.io/r/base/c.html)("Clay", "Silt", "Sand")){
hydrosprops.UNSODA[sum.tex.r,j] = hydrosprops.UNSODA[sum.tex.r,j] * 100
}
hydrosprops.UNSODA$project_url = "https://data.nal.usda.gov/dataset/unsoda-20-unsaturated-soil-hydraulic-database-database-and-program-indirect-methods-estimating-unsaturated-hydraulic-properties"
hydrosprops.UNSODA$citation_url = "https://doi.org/10.15482/USDA.ADC/1173246"
hydrosprops.UNSODA <- complete.vars(hydrosprops.UNSODA, coords = [c](https://rdrr.io/r/base/c.html)("Long", "Lat"))
saveRDS.gz(hydrosprops.UNSODA, "/mnt/diskstation/data/Soil_points/INT/UNSODA/hydrosprops.UNSODA.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.UNSODA)
#> [1] 298 40
```
#### 6\.3\.0\.12 HYDROS
* Schindler, Uwe; Müller, Lothar (2015\): [Soil hydraulic functions of international soils measured with the Extended Evaporation Method (EEM) and the HYPROP device](http://dx.doi.org/10.4228/ZALF.2003.273), Leibniz\-Zentrum für Agrarlandschaftsforschung (ZALF) e.V.\[doi: 10\.4228/ZALF.2003\.273]
```
if(!exists("hydrosprops.HYDROS")){ ## guard condition assumed
hydros.tbl = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/HydroS/int_rawret.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
hydros.tbl = hydros.tbl[,]
#summary(hydros.tbl$TENSION)
hydros.tbl$TENSIONc = [cut](https://rdrr.io/r/base/cut.html)(hydros.tbl$TENSION, breaks=[c](https://rdrr.io/r/base/c.html)(1,5,8,15,30,40,1000,15001))
#summary(hydros.tbl$TENSIONc)
hydros.tbl$WATER_CONTENT = hydros.tbl$WATER_CONTENT
#summary(hydros.tbl$WATER_CONTENT)
#head(hydros.tbl)
pr2.lst = [c](https://rdrr.io/r/base/c.html)("(5,8]", "(8,15]","(30,40]","(1e+03,1.5e+04]")
cl.lst = [c](https://rdrr.io/r/base/c.html)("w6clod", "w10cld", "w3cld", "w15l2")
hydros.tbl.df = [data.frame](https://rdrr.io/r/base/data.frame.html)(SITE_ID=[unique](https://Rdatatable.gitlab.io/data.table/reference/duplicated.html)(hydros.tbl$SITE_ID), w6clod=NA, w10cld=NA, w3cld=NA, w15l2=NA)
for(i in 1:[length](https://rdrr.io/r/base/length.html)(pr2.lst)){
hydros.tbl.df[,cl.lst[i]] = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydros.tbl.df, hydros.tbl[[which](https://rdrr.io/r/base/which.html)(hydros.tbl$TENSIONc==pr2.lst[i]),[c](https://rdrr.io/r/base/c.html)("SITE_ID","WATER_CONTENT")], match="first")$WATER_CONTENT
}
#head(hydros.tbl.df)
## properties:
hydros.soil = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/HydroS/int_basicdata.csv", sep="\t", stringsAsFactors = FALSE, dec = ",")
#head(hydros.soil)
#plot(hydros.soil[,c("H","R")])
hydros.col = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydros.soil, hydros.tbl.df)
#summary(hydros.col$OMC)
hydros.col$oc = hydros.col$OMC/1.724
hydros.col$location_accuracy_min = 100
hydros.col$location_accuracy_max = NA
for(j in [c](https://rdrr.io/r/base/c.html)("layer_sequence", "db_13b", "COLEws", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")){ hydros.col[,j] = NA }
hydros.sel.h = [c](https://rdrr.io/r/base/c.html)("SITE_ID", "SITE", "SAMP_DATE", "H", "R", "location_accuracy_min", "location_accuracy_max", "SAMP_NO", "layer_sequence", "TOP_DEPTH", "BOT_DEPTH", "HORIZON", "db_13b", "BULK_DENSITY", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc", "ph_kcl", "ph_h2o", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field")
hydros.sel.h[[which](https://rdrr.io/r/base/which.html)(!hydros.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(hydros.col))]
hydrosprops.HYDROS = hydros.col[,hydros.sel.h]
hydrosprops.HYDROS$source_db = "HydroS"
hydrosprops.HYDROS$confidence_degree = 1
hydrosprops.HYDROS$project_url = "http://dx.doi.org/10.4228/ZALF.2003.273"
hydrosprops.HYDROS$citation_url = "https://doi.org/10.18174/odjar.v3i1.15763"
hydrosprops.HYDROS <- complete.vars(hydrosprops.HYDROS, coords = [c](https://rdrr.io/r/base/c.html)("H","R"))
saveRDS.gz(hydrosprops.HYDROS, "/mnt/diskstation/data/Soil_points/INT/HYDROS/hydrosprops.HYDROS.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.HYDROS)
#> [1] 153 40
```
#### 6\.3\.0\.13 SWIG
* Rahmati, M., Weihermüller, L., Vanderborght, J., Pachepsky, Y. A., Mao, L., Sadeghi, S. H., … \& Toth, B. (2018\). [Development and analysis of the Soil Water Infiltration Global database](https://doi.org/10.5194/essd-10-1237-2018). Earth Syst. Sci. Data, 10, 1237–1263\.
```
if(!exists("hydrosprops.SWIG")){ ## guard condition assumed
meta.tbl = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Metadata.csv", skip = 1, fill = TRUE, blank.lines.skip=TRUE, flush=TRUE, stringsAsFactors=FALSE)
swig.xy = [read.table](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Locations.csv", sep=";", dec = ",", stringsAsFactors=FALSE, header=TRUE, na.strings = [c](https://rdrr.io/r/base/c.html)("-",""," "), fill = TRUE)
swig.xy$x = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(",", ".", swig.xy$x))
swig.xy$y = [as.numeric](https://rdrr.io/r/base/numeric.html)([gsub](https://rdrr.io/r/base/grep.html)(",", ".", swig.xy$y))
swig.xy = swig.xy[,1:8]
[names](https://rdrr.io/r/base/names.html)(swig.xy)[3] = "EndDataset"
[library](https://rdrr.io/r/base/library.html)([tidyr](https://tidyr.tidyverse.org))
swig.xyf = tidyr::[fill](https://tidyr.tidyverse.org/reference/fill.html)(swig.xy, [c](https://rdrr.io/r/base/c.html)("Dataset","EndDataset"))
swig.xyf$N = swig.xyf$EndDataset - swig.xyf$Dataset + 1
swig.xyf$N = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(swig.xyf$N<1,1,swig.xyf$N)
swig.xyf = swig.xyf[,]
#plot(swig.xyf[,c("x","y")])
swig.xyf.df = swig.xyf[[rep](https://rdrr.io/r/base/rep.html)([seq_len](https://rdrr.io/r/base/seq.html)([nrow](https://rdrr.io/r/base/nrow.html)(swig.xyf)), swig.xyf$N),]
rn = [sapply](https://rdrr.io/r/base/lapply.html)([row.names](https://rdrr.io/r/base/row.names.html)(swig.xyf.df), function(i){[as.numeric](https://rdrr.io/r/base/numeric.html)([strsplit](https://Rdatatable.gitlab.io/data.table/reference/tstrsplit.html)(i, "\\.")[[1]][2])})
swig.xyf.df$Code = [rowSums](https://rdrr.io/r/base/colSums.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(rn, swig.xyf.df$Dataset), na.rm = TRUE)
## bind together
swig.col = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(swig.xyf.df[,[c](https://rdrr.io/r/base/c.html)("Code","x","y")], meta.tbl)
  ## additional values for ksat
swig2.tbl = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/SWIG/Statistics.csv", fill = TRUE, blank.lines.skip=TRUE, sep=";", dec = ",", flush=TRUE, stringsAsFactors=FALSE)
#hist(log1p(as.numeric(swig2.tbl$Ks..cm.hr.)), breaks=45, col="gray")
swig.col$Ks..cm.hr. = [as.numeric](https://rdrr.io/r/base/numeric.html)(plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(swig.col["Code"], swig2.tbl[[c](https://rdrr.io/r/base/c.html)("Code","Ks..cm.hr.")])$Ks..cm.hr.)
swig.col$Ks..cm.hr. = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(swig.col$Ks..cm.hr. * 24 <= 0.01, NA, swig.col$Ks..cm.hr.)
swig.col$Ksat = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(swig.col$Ksat), swig.col$Ks..cm.hr., swig.col$Ksat)
for(j in [c](https://rdrr.io/r/base/c.html)("usiteid", "site_obsdate", "labsampnum", "layer_sequence", "hzn_desgn", "db_13b", "COLEws", "adod", "wrd_ws13", "w15bfm", "w15cly", "cec7_cly", "w6clod", "w10cld", "ph_kcl", "cec_nh4", "ksat_lab")){ swig.col[,j] = NA }
## depths are missing?
swig.col$hzn_top = 0
swig.col$hzn_bot = 20
swig.col$location_accuracy_min = NA
swig.col$location_accuracy_max = NA
swig.col$w15l2 = swig.col$PWP * 100
swig.col$w3cld = swig.col$FC * 100
swig.sel.h = [c](https://rdrr.io/r/base/c.html)("Code", "usiteid", "site_obsdate", "x", "y", "location_accuracy_min", "location_accuracy_max", "labsampnum", "layer_sequence", "hzn_top", "hzn_bot", "hzn_desgn", "db_13b", "Db", "COLEws", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13", "cec7_cly", "w15cly", "Texture.Class", "Clay", "Silt", "Sand", "OC", "ph_kcl", "pH", "CEC", "cec_nh4", "Gravel", "ksat_lab", "Ksat")
swig.sel.h[[which](https://rdrr.io/r/base/which.html)(!swig.sel.h [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(swig.col))]
hydrosprops.SWIG = swig.col[,swig.sel.h]
hydrosprops.SWIG$source_db = "SWIG"
  hydrosprops.SWIG$Ksat = hydrosprops.SWIG$Ksat * 24 ## convert cm/hr to cm/day
#hist(hydrosprops.SWIG$w3cld, breaks=45, col="gray")
#hist(log1p(hydrosprops.SWIG$Ksat), breaks=25, col="gray")
#summary(hydrosprops.SWIG$Ksat); summary(hydrosprops.UNSODA$ksat_lab)
## confidence degree
SWIG.ql = openxlsx::[read.xlsx](https://rdrr.io/pkg/openxlsx/man/read.xlsx.html)(xlsxFile, sheet = "SWIG_database_Confidence_degree")
hydrosprops.SWIG$confidence_degree = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$confidence_degree
hydrosprops.SWIG$location_accuracy_min = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_min
hydrosprops.SWIG$location_accuracy_max = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$location_accuracy_max
#summary(as.factor(hydrosprops.SWIG$confidence_degree))
## replace coordinates
SWIG.Long = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_long
SWIG.Lat = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$Improved_lat
hydrosprops.SWIG$x = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(SWIG.Long), hydrosprops.SWIG$x, SWIG.Long)
hydrosprops.SWIG$y = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(SWIG.Long), hydrosprops.SWIG$y, SWIG.Lat)
hydrosprops.SWIG$Texture.Class = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hydrosprops.SWIG["Code"], SWIG.ql)$tex_psda
swig.lab = SWIG.ql$Code[[which](https://rdrr.io/r/base/which.html)(SWIG.ql$Ksat_Method [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("Constant head method", "Constant Head Method", "Falling head method"))]
hydrosprops.SWIG$ksat_lab[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab] = hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab]
hydrosprops.SWIG$Ksat[hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) swig.lab] = NA
## remove duplicates
swig.rem = hydrosprops.SWIG$Code [%in%](https://rdrr.io/r/base/match.html) SWIG.ql$Code[[is.na](https://rdrr.io/r/base/NA.html)(SWIG.ql$additional_information)]
#summary(swig.rem)
#Mode FALSE TRUE
#logical 200 6921
hydrosprops.SWIG = hydrosprops.SWIG[swig.rem,]
hydrosprops.SWIG = hydrosprops.SWIG[,]
## remove all ksat values < 0.01 ?
#summary(hydrosprops.SWIG$Ksat < 0.01)
hydrosprops.SWIG$project_url = "https://soil-modeling.org/resources-links/data-portal/swig"
hydrosprops.SWIG$citation_url = "https://doi.org/10.5194/essd-10-1237-2018"
hydrosprops.SWIG <- complete.vars(hydrosprops.SWIG, sel=[c](https://rdrr.io/r/base/c.html)("w15l2","w3cld","ksat_lab","Ksat"), coords=[c](https://rdrr.io/r/base/c.html)("x","y"))
saveRDS.gz(hydrosprops.SWIG, "/mnt/diskstation/data/Soil_points/INT/SWIG/hydrosprops.SWIG.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.SWIG)
#> [1] 3676 40
```
#### 6\.3\.0\.14 Pseudo\-points
* Pseudo\-observations using simulated points (world deserts)
```
if(!exists("hydrosprops.SIM")){ ## guard condition assumed
## 0 soil organic carbon + 98% sand content (deserts)
sprops.SIM = [readRDS](https://rdrr.io/r/base/readRDS.html)("/mnt/diskstation/data/LandGIS/training_points/soil_props/sprops.SIM.rds")
sprops.SIM$w10cld = 3.1
sprops.SIM$w3cld = 1.2
sprops.SIM$w15l2 = 0.8
sprops.SIM$tex_psda = "sand"
sprops.SIM$usiteid = sprops.SIM$lcv_admin0_fao.gaul_c_250m_s0..0cm_2015_v1.0
sprops.SIM$longitude_decimal_degrees = sprops.SIM$x
sprops.SIM$latitude_decimal_degrees = sprops.SIM$y
## Very approximate values for Ksat for shifting sand:
tax.r = raster::[extract](https://rdrr.io/pkg/raster/man/extract.html)(raster("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif"), sprops.SIM[,[c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees","latitude_decimal_degrees")])
tax.leg = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/LandGIS/archive/predicted250m/sol_grtgroup_usda.soiltax_c_250m_s0..0cm_1950..2017_v0.1.tif.csv")
tax.ksat_lab = [aggregate](https://rdrr.io/r/stats/aggregate.html)(eth.tbl$ksat_lab, by=[list](https://rdrr.io/r/base/list.html)(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.ksat_lab.sd = [aggregate](https://rdrr.io/r/stats/aggregate.html)(eth.tbl$ksat_lab, by=[list](https://rdrr.io/r/base/list.html)(Group=eth.tbl$tax_grtgroup), FUN=sd, na.rm=TRUE)
tax.ksat_field = [aggregate](https://rdrr.io/r/stats/aggregate.html)(eth.tbl$ksat_field, by=[list](https://rdrr.io/r/base/list.html)(Group=eth.tbl$tax_grtgroup), FUN=mean, na.rm=TRUE)
tax.leg$ksat_lab = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(tax.leg, tax.ksat_lab)$x
tax.leg$ksat_field = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(tax.leg, tax.ksat_field)$x
tax.sel = [c](https://rdrr.io/r/base/c.html)("cryochrepts","cryorthods","torripsamments","haplustolls","torrifluvents")
sprops.SIM$ksat_field = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group [%in%](https://rdrr.io/r/base/match.html) tax.sel,])$ksat_field
sprops.SIM$ksat_lab = [join](https://dplyr.tidyverse.org/reference/mutate-joins.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(site_key=sprops.SIM$site_key, Number=tax.r), tax.leg[tax.leg$Group [%in%](https://rdrr.io/r/base/match.html) tax.sel,])$ksat_lab
#summary(sprops.SIM$ksat_lab)
#summary(sprops.SIM$ksat_field)
#View(sprops.SIM)
for(j in col.names[[which](https://rdrr.io/r/base/which.html)(!col.names [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(sprops.SIM))]){ sprops.SIM[,j] <- NA }
sprops.SIM$project_url = "https://gitlab.com/openlandmap/global-layers"
sprops.SIM$citation_url = ""
hydrosprops.SIM = sprops.SIM[,col.names]
hydrosprops.SIM$confidence_degree = 30
saveRDS.gz(hydrosprops.SIM, "/mnt/diskstation/data/Soil_points/INT/hydrosprops.SIM.rds")
}
[dim](https://rdrr.io/r/base/dim.html)(hydrosprops.SIM)
#> [1] 8133 40
```
6\.4 Bind all datasets
----------------------
#### 6\.4\.0\.1 Bind and clean\-up
Bind all tables / rename columns where necessary:
```
[ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("hydrosprops.*"))
#> [1] "hydrosprops.AfSPDB" "hydrosprops.EGRPR" "hydrosprops.ETH"
#> [4] "hydrosprops.FRED" "hydrosprops.HYBRAS" "hydrosprops.HYDROS"
#> [7] "hydrosprops.ISIS" "hydrosprops.NCSS" "hydrosprops.NPDB"
#> [10] "hydrosprops.SIM" "hydrosprops.SPADE2" "hydrosprops.SWIG"
#> [13] "hydrosprops.UNSODA" "hydrosprops.WISE"
tot_sprops = dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("hydrosprops.*")), function(i){ [mutate_all](https://dplyr.tidyverse.org/reference/mutate_all.html)([setNames](https://rdrr.io/r/stats/setNames.html)([get](https://rdrr.io/r/base/get.html)(i), col.names), as.character) }))
## convert to numeric:
for(j in [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "hzn_top","hzn_bot", "oc", "ph_h2o", "ph_kcl", "db_od", "clay_tot_psa", "sand_tot_psa","silt_tot_psa", "wpg2", "db_13b", "COLEws", "w15cly", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13","cec7_cly", "cec_sum", "cec_nh4", "ksat_lab","ksat_field")){
tot_sprops[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
## rename some columns:
tot_sprops = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(tot_sprops, replace = [c](https://rdrr.io/r/base/c.html)("db_od" = "db", "ph_h2o" = "ph_h2o_v", "oc" = "oc_v"))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$source_db))
#> AfSPDB Australian_ksat_data Belgian_ksat_data
#> 10720 118 145
#> Canada_NPDB China_ksat_data ETH_literature
#> 404 209 1954
#> Florida_ksat_data FRED HYBRAS
#> 6532 3761 814
#> HydroS ISRIC_ISIS ISRIC_WISE
#> 153 1176 1325
#> Russia_EGRPR SIMULATED SPADE2
#> 1138 8133 1182
#> SWIG Tibetan_plateau_ksat_data UNSODA
#> 3676 65 298
#> USDA_NCSS
#> 113991
```
Add a unique row identifier
```
tot_sprops$uuid = uuid::[UUIDgenerate](https://rdrr.io/pkg/uuid/man/UUIDgenerate.html)(use.time=TRUE, n=[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops))
```
and a unique location ID based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::[encode_olc](https://rdrr.io/pkg/olctools/man/encode_olc.html)(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
[length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$olc_id)))
#> [1] 25075
```
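An 11\-character code identifies a grid cell only a few meters across, so samples taken at effectively the same spot share one `olc_id`. A quick round\-trip check (illustrative coordinates; output not shown):

```
olc.ex = olctools::encode_olc(52.0932, 5.6213, 11)   ## illustrative coordinates
olc.ex
olctools::decode_olc(olc.ex)   ## bounding box / centre of the roughly 3 m cell
```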
#### 6\.4\.0\.2 Quality\-control spatial locations
Unique locations:
```
tot_sprops.pnts = tot_sprops[!duplicated(tot_sprops$olc_id),] ## assumed: one record per unique olc_id
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "+init=epsg:4326"
```
Remove points falling in the sea or similar:
```
if(!exists("rem.sp")){ ## guard condition assumed
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::[extract](https://rdrr.io/pkg/terra/man/extract.html)(mask, terra::[vect](https://rdrr.io/pkg/terra/man/vect.html)(tot_sprops.pnts))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ov.sprops[,2]))
if([sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]))>0 | [sum](https://rdrr.io/r/base/sum.html)(ov.sprops[,2]==2)>0){
rem.lst = [which](https://rdrr.io/r/base/which.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
} else {
rem.sp = NA
}
}
## final number of unique spatial locations:
[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts)
#> [1] 25075
```
#### 6\.4\.0\.3 Clean\-up
Clean up typos and physically impossible values:
```
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2", "w6clod", "w10cld", "w3cld", "w15l2")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
for(j in [c](https://rdrr.io/r/base/c.html)("ph_h2o_v","ph_kcl")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in [c](https://rdrr.io/r/base/c.html)("db")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#summary(tot_sprops$ksat_lab)
for(j in [c](https://rdrr.io/r/base/c.html)("ksat_lab","ksat_field")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j] <=0, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in [c](https://rdrr.io/r/base/c.html)("oc_v")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>90|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
#tot_sprops = tot_sprops[!is.na(tot_sprops$hzn_depth),]
## texture fractions check:
sum.tex.T = [rowSums](https://rdrr.io/r/base/colSums.html)(tot_sprops[,[c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")], na.rm = TRUE)
[which](https://rdrr.io/r/base/which.html)(sum.tex.T<1.2 & sum.tex.T>0)
#> [1] 9334 9979 12371 22431 22441 81311 81312 81313 93971 150217
for(i in [which](https://rdrr.io/r/base/which.html)(sum.tex.T<1.2 & sum.tex.T>0)){
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")){
tot_sprops[i,j] <- NA
}
}
```
#### 6\.4\.0\.4 Histogram plots
```
[library](https://rdrr.io/r/base/library.html)([ggplot2](http://ggplot2.tidyverse.org))
#ggplot(tot_sprops[tot_sprops$w15l2<100,], aes(x=source_db, y=w15l2)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops[tot_sprops$w3cld<100,], [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=w3cld)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 68935 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=db)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 69588 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=ph_h2o_v)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 122423 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log10](https://rdrr.io/r/base/Log.html)(ksat_field+1))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 144683 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(ksat_lab))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 139595 rows containing non-finite values (stat_boxplot).
```
#### 6\.4\.0\.5 Convert to wide format
Add `layer_sequence` values where missing, since they are needed to convert to wide format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = [paste](https://rdrr.io/r/base/paste.html)(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if(!"layer_sequence.f" [%in%](https://rdrr.io/r/base/match.html) [names](https://rdrr.io/r/base/names.html)(tot_sprops)){ ## guard condition assumed
[library](https://rdrr.io/r/base/library.html)([dplyr](https://dplyr.tidyverse.org))
## Note: takes >1 min
l.s1 <- tot_sprops[,[c](https://rdrr.io/r/base/c.html)("olc_id","hzn_depth")] [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [group_by](https://dplyr.tidyverse.org/reference/group_by.html)(olc_id) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(layer_sequence.f = data.table::[frank](https://Rdatatable.gitlab.io/data.table/reference/frank.html)(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
}
```
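For reference, `data.table::frank()` with `ties.method = "first"` simply orders the horizons of each profile by depth; a toy example (values are illustrative only):

```
library(dplyr)
d = data.frame(olc_id = c("A","A","A","B","B"), hzn_depth = c(5, 30, 12, 10, 40))
d %>% group_by(olc_id) %>%
  mutate(layer_sequence.f = data.table::frank(hzn_depth, ties.method = "first"))
## within-profile ranks: 1 3 2 for profile A and 1 2 for profile B
```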
Convert the long table to [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column:
```
if(!exists("tot_sprops.w")){ ## guard condition assumed
[library](https://rdrr.io/r/base/library.html)([data.table](http://r-datatable.com))
hor.names.s = [c](https://rdrr.io/r/base/c.html)("hzn_top", "hzn_bot", "hzn_desgn", "db", "w6clod", "w3cld", "w15l2", "adod", "wrd_ws13", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc_v", "ph_kcl", "ph_h2o_v", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field", "uuid")
tot_sprops.w = data.table::[dcast](https://Rdatatable.gitlab.io/data.table/reference/dcast.data.table.html)( [as.data.table](https://Rdatatable.gitlab.io/data.table/reference/as.data.table.html)(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = hor.names.s,
fun=function(x){ x[1] },
verbose = FALSE)
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
[unlink](https://rdrr.io/r/base/unlink.html)("./out/gpkg/sol_hydro.pnts_horizons.gpkg")
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_hydro.pnts_horizons.gpkg", "sol_hydro.pnts_horizons", drive="GPKG")
```
#### 6\.4\.0\.6 Ksat dataset:
```
sel.compl =
[summary](https://rdrr.io/r/base/summary.html)(sel.compl)
#> Mode FALSE TRUE
#> logical 131042 24752
## complete ksat_field points
tot_sprops.pnts.C = tot_sprops[ & !tot_sprops$source_db == "SIMULATED",]
sum.NA = [sapply](https://rdrr.io/r/base/lapply.html)(tot_sprops.pnts.C, function(i){[sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(i))})
tot_sprops.pnts.C = tot_sprops.pnts.C[,!sum.NA==[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts.C)]
tot_sprops.pnts.C$w10cld = NULL
tot_sprops.pnts.C$w6clod = NULL
tot_sprops.pnts.C$w15bfm = NULL
tot_sprops.pnts.C$site_obsdate = NULL
tot_sprops.pnts.C$confidence_degree = NULL
tot_sprops.pnts.C$cec_sum = NULL
tot_sprops.pnts.C$wpg2 = NULL
tot_sprops.pnts.C$uuid = NULL
[dim](https://rdrr.io/r/base/dim.html)(tot_sprops.pnts.C)
#> [1] 13258 25
```
#### 6\.4\.0\.7 RDS files
Plot in Goode Homolozine projection and save final objects:
```
if({
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1])
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_hydro.pnts_sites.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_hydro.pnts_sites.pdf ./img/sol_hydro.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_hydro.pnts_sites.png ./img/sol_hydro.pnts_sites.png")
}
```
(\#fig:sol\_hydro.pnts\_sites)Soil profiles and soil samples with physical and hydraulic soil properties properties global compilation.
Fig. 1: Soil profiles and soil samples with physical and hydraulic soil properties properties global compilation.
```
if({
sel.ks = [which](https://rdrr.io/r/base/which.html)(tot_sprops.pnts$location_id [%in%](https://rdrr.io/r/base/match.html) tot_sprops.pnts.C$location_id)
tot_spropsC.pnts_sf <- st_as_sf(tot_sprops.pnts[sel.ks, 1])
plot_gh(tot_spropsC.pnts_sf, out.pdf="./img/sol_ksat.pnts_sites.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_ksat.pnts_sites.pdf ./img/sol_ksat.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_ksat.pnts_sites.png ./img/sol_ksat.pnts_sites.png")
}
```
(\#fig:sol\_ksat.pnts\_sites)Soil profiles and soil samples with Ksat measurements global compilation
Fig. 2: Soil profiles and soil samples with Ksat measurements global compilation.
#### 6\.4\.0\.1 Bind and clean\-up
Bind all tables / rename columns where necessary:
```
[ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("hydrosprops.*"))
#> [1] "hydrosprops.AfSPDB" "hydrosprops.EGRPR" "hydrosprops.ETH"
#> [4] "hydrosprops.FRED" "hydrosprops.HYBRAS" "hydrosprops.HYDROS"
#> [7] "hydrosprops.ISIS" "hydrosprops.NCSS" "hydrosprops.NPDB"
#> [10] "hydrosprops.SIM" "hydrosprops.SPADE2" "hydrosprops.SWIG"
#> [13] "hydrosprops.UNSODA" "hydrosprops.WISE"
tot_sprops = dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)([lapply](https://rdrr.io/r/base/lapply.html)([ls](https://rdrr.io/r/base/ls.html)(pattern=[glob2rx](https://rdrr.io/r/utils/glob2rx.html)("hydrosprops.*")), function(i){ [mutate_all](https://dplyr.tidyverse.org/reference/mutate_all.html)([setNames](https://rdrr.io/r/stats/setNames.html)([get](https://rdrr.io/r/base/get.html)(i), col.names), as.character) }))
## convert to numeric:
for(j in [c](https://rdrr.io/r/base/c.html)("longitude_decimal_degrees", "latitude_decimal_degrees", "location_accuracy_min", "location_accuracy_max", "layer_sequence", "hzn_top","hzn_bot", "oc", "ph_h2o", "ph_kcl", "db_od", "clay_tot_psa", "sand_tot_psa","silt_tot_psa", "wpg2", "db_13b", "COLEws", "w15cly", "w6clod", "w10cld", "w3cld", "w15l2", "w15bfm", "adod", "wrd_ws13","cec7_cly", "cec_sum", "cec_nh4", "ksat_lab","ksat_field")){
tot_sprops[,j] = [as.numeric](https://rdrr.io/r/base/numeric.html)(tot_sprops[,j])
}
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#> Warning: NAs introduced by coercion
#head(tot_sprops)
## rename some columns:
tot_sprops = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(tot_sprops, replace = [c](https://rdrr.io/r/base/c.html)("db_od" = "db", "ph_h2o" = "ph_h2o_v", "oc" = "oc_v"))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$source_db))
#> AfSPDB Australian_ksat_data Belgian_ksat_data
#> 10720 118 145
#> Canada_NPDB China_ksat_data ETH_literature
#> 404 209 1954
#> Florida_ksat_data FRED HYBRAS
#> 6532 3761 814
#> HydroS ISRIC_ISIS ISRIC_WISE
#> 153 1176 1325
#> Russia_EGRPR SIMULATED SPADE2
#> 1138 8133 1182
#> SWIG Tibetan_plateau_ksat_data UNSODA
#> 3676 65 298
#> USDA_NCSS
#> 113991
```
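The repeated “NAs introduced by coercion” warnings are expected here: any cell that cannot be parsed as a number (for instance empty strings, detection\-limit notation or free text in the source tables) simply becomes `NA`. A minimal illustration with made\-up strings:
```
as.numeric(c("7.2", "<0.1", "", "n.d."))
## -> 7.2 NA NA NA, together with a "NAs introduced by coercion" warning
```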
Add a unique row identifier
```
tot_sprops$uuid = uuid::[UUIDgenerate](https://rdrr.io/pkg/uuid/man/UUIDgenerate.html)(use.time=TRUE, n=[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops))
```
and a unique location ID based on the [Open Location Code](https://cran.r-project.org/web/packages/olctools/vignettes/Introduction_to_olctools.html):
```
tot_sprops$olc_id = olctools::[encode_olc](https://rdrr.io/pkg/olctools/man/encode_olc.html)(tot_sprops$latitude_decimal_degrees, tot_sprops$longitude_decimal_degrees, 11)
[length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(tot_sprops$olc_id)))
#> [1] 25075
```
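For orientation, an 11\-character Open Location Code corresponds to a cell of roughly 3 x 3.5 m, which is why `olc_id` can serve as a “same location” key across datasets. A minimal sketch with an arbitrary coordinate (not taken from the compilation):
```
library(olctools)
## encode a single latitude/longitude pair at 11 characters;
## points falling in the same ~3 x 3.5 m cell share the same code
encode_olc(44.123456, 15.654321, 11)
```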
#### 6\.4\.0\.2 Quality\-control spatial locations
Unique locations:
```
## assumption: keep one record per unique location (olc_id)
tot_sprops.pnts = tot_sprops[!duplicated(tot_sprops$olc_id),]
coordinates(tot_sprops.pnts) <- ~ longitude_decimal_degrees + latitude_decimal_degrees
proj4string(tot_sprops.pnts) <- "+init=epsg:4326"
```
Remove points falling in the sea or similar:
```
if(!exists("rem.sp")){ ## assumed run-once guard
#mask = terra::rast("./layers1km/lcv_landmask_esacci.lc.l4_c_1km_s0..0cm_2000..2015_v1.0.tif")
mask = terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/diskstation/data/LandGIS/layers250m/lcv_landmask_esacci.lc.l4_c_250m_s0..0cm_2000..2015_v1.0.tif")
ov.sprops <- terra::[extract](https://rdrr.io/pkg/terra/man/extract.html)(mask, terra::[vect](https://rdrr.io/pkg/terra/man/vect.html)(tot_sprops.pnts))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(ov.sprops[,2]))
if([sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]))>0 | [sum](https://rdrr.io/r/base/sum.html)(ov.sprops[,2]==2)>0){
rem.lst = [which](https://rdrr.io/r/base/which.html)([is.na](https://rdrr.io/r/base/NA.html)(ov.sprops[,2]) | ov.sprops[,2]==2 | ov.sprops[,2]==4)
rem.sp = tot_sprops.pnts$site_key[rem.lst]
tot_sprops.pnts = tot_sprops.pnts[-rem.lst,]
} else {
rem.sp = NA
}
}
## final number of unique spatial locations:
[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts)
#> [1] 25075
```
#### 6\.4\.0\.3 Clean\-up
Clean up typos and physically impossible values:
```
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "sand_tot_psa", "silt_tot_psa", "wpg2", "w6clod", "w10cld", "w3cld", "w15l2")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>100|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
for(j in [c](https://rdrr.io/r/base/c.html)("ph_h2o_v","ph_kcl")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>12|tot_sprops[,j]<2, NA, tot_sprops[,j])
}
#hist(tot_sprops$db_od)
for(j in [c](https://rdrr.io/r/base/c.html)("db")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>2.4|tot_sprops[,j]<0.05, NA, tot_sprops[,j])
}
#summary(tot_sprops$ksat_lab)
for(j in [c](https://rdrr.io/r/base/c.html)("ksat_lab","ksat_field")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j] <=0, NA, tot_sprops[,j])
}
#hist(tot_sprops$oc)
for(j in [c](https://rdrr.io/r/base/c.html)("oc_v")){
tot_sprops[,j] = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops[,j]>90|tot_sprops[,j]<0, NA, tot_sprops[,j])
}
tot_sprops$hzn_depth = tot_sprops$hzn_top + (tot_sprops$hzn_bot-tot_sprops$hzn_top)/2
#tot_sprops = tot_sprops[!is.na(tot_sprops$hzn_depth),]
## texture fractions check:
sum.tex.T = [rowSums](https://rdrr.io/r/base/colSums.html)(tot_sprops[,[c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")], na.rm = TRUE)
[which](https://rdrr.io/r/base/which.html)(sum.tex.T<1.2 & sum.tex.T>0)
#> [1] 9334 9979 12371 22431 22441 81311 81312 81313 93971 150217
for(i in [which](https://rdrr.io/r/base/which.html)(sum.tex.T<1.2 & sum.tex.T>0)){
for(j in [c](https://rdrr.io/r/base/c.html)("clay_tot_psa", "silt_tot_psa", "sand_tot_psa")){
tot_sprops[i,j] <- NA
}
}
```
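The `sum.tex.T<1.2 & sum.tex.T>0` filter catches rows whose texture fractions sum to a physically implausible small value, for example where the fractions were recorded as proportions (0–1) rather than percentages. A small self\-contained sketch of the same check on toy values:
```
## toy rows: percentages (plausible), proportions (suspicious), all missing
tex <- data.frame(clay_tot_psa = c(25, 0.25, NA),
                  silt_tot_psa = c(35, 0.35, NA),
                  sand_tot_psa = c(40, 0.40, NA))
sum.tex <- rowSums(tex, na.rm = TRUE)
sum.tex
## -> 100, 1, 0
which(sum.tex < 1.2 & sum.tex > 0)
## -> 2 (only the proportion-style row is flagged and set to NA above)
```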
#### 6\.4\.0\.4 Histogram plots
```
[library](https://rdrr.io/r/base/library.html)([ggplot2](http://ggplot2.tidyverse.org))
#ggplot(tot_sprops[tot_sprops$w15l2<100,], aes(x=source_db, y=w15l2)) + geom_boxplot() + theme(axis.text.x = element_text(angle = 90, hjust = 1))
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops[tot_sprops$w3cld<100,], [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=w3cld)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 68935 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=db)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 69588 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=ph_h2o_v)) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 122423 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log10](https://rdrr.io/r/base/Log.html)(ksat_field+1))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 144683 rows containing non-finite values (stat_boxplot).
```
```
[ggplot](https://rdrr.io/pkg/ggplot2/man/ggplot.html)(tot_sprops, [aes](https://rdrr.io/pkg/ggplot2/man/aes.html)(x=source_db, y=[log1p](https://rdrr.io/r/base/Log.html)(ksat_lab))) + [geom_boxplot](https://rdrr.io/pkg/ggplot2/man/geom_boxplot.html)() + [theme](https://rdrr.io/pkg/ggplot2/man/theme.html)(axis.text.x = [element_text](https://rdrr.io/pkg/ggplot2/man/element.html)(angle = 90, hjust = 1))
#> Warning: Removed 139595 rows containing non-finite values (stat_boxplot).
```
#### 6\.4\.0\.5 Convert to wide format
Add `layer_sequence` where missing, since it is needed to convert the table to wide
format:
```
#summary(tot_sprops$layer_sequence)
tot_sprops$dsiteid = [paste](https://rdrr.io/r/base/paste.html)(tot_sprops$source_db, tot_sprops$site_key, tot_sprops$site_obsdate, sep="_")
if(!"layer_sequence.f" %in% names(tot_sprops)){ ## assumed run-once guard
[library](https://rdrr.io/r/base/library.html)([dplyr](https://dplyr.tidyverse.org))
## Note: takes >1 min
l.s1 <- tot_sprops[,[c](https://rdrr.io/r/base/c.html)("olc_id","hzn_depth")] [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [group_by](https://dplyr.tidyverse.org/reference/group_by.html)(olc_id) [%>%](https://magrittr.tidyverse.org/reference/pipe.html) [mutate](https://dplyr.tidyverse.org/reference/mutate.html)(layer_sequence.f = data.table::[frank](https://Rdatatable.gitlab.io/data.table/reference/frank.html)(hzn_depth, ties.method = "first"))
tot_sprops$layer_sequence.f = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(tot_sprops$layer_sequence), l.s1$layer_sequence.f, tot_sprops$layer_sequence)
tot_sprops$layer_sequence.f = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tot_sprops$layer_sequence.f>6, 6, tot_sprops$layer_sequence.f)
}
```
Convert the long table to [wide table format](https://ncss-tech.github.io/AQP/aqp/aqp-intro.html) so that each depth gets a unique column:
```
if(!exists("tot_sprops.w")){ ## assumed run-once guard
[library](https://rdrr.io/r/base/library.html)([data.table](http://r-datatable.com))
hor.names.s = [c](https://rdrr.io/r/base/c.html)("hzn_top", "hzn_bot", "hzn_desgn", "db", "w6clod", "w3cld", "w15l2", "adod", "wrd_ws13", "tex_psda", "clay_tot_psa", "silt_tot_psa", "sand_tot_psa", "oc_v", "ph_kcl", "ph_h2o_v", "cec_sum", "cec_nh4", "wpg2", "ksat_lab", "ksat_field", "uuid")
tot_sprops.w = data.table::[dcast](https://Rdatatable.gitlab.io/data.table/reference/dcast.data.table.html)( [as.data.table](https://Rdatatable.gitlab.io/data.table/reference/as.data.table.html)(tot_sprops),
formula = olc_id ~ layer_sequence.f,
value.var = hor.names.s,
fun=function(x){ x[1] },
verbose = FALSE)
}
tot_sprops_w.pnts = tot_sprops.pnts
tot_sprops_w.pnts@data = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops.pnts@data, tot_sprops.w)
#> Joining by: olc_id
```
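To make the effect of the multi\-variable `dcast()` call concrete, here is a toy example with made\-up values: every variable in `value.var` gets one column per `layer_sequence.f` level, named `variable_1`, `variable_2`, and so on.
```
library(data.table)
dt <- data.table(olc_id = c("A", "A", "B"),
                 layer_sequence.f = c(1, 2, 1),
                 db = c(1.3, 1.5, 1.1),
                 w3cld = c(28, 24, 33))
dcast(dt, olc_id ~ layer_sequence.f,
      value.var = c("db", "w3cld"),
      fun.aggregate = function(x){ x[1] })
## -> one row per olc_id with columns db_1, db_2, w3cld_1, w3cld_2
```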
Write all soil profiles using a wide format:
```
sel.rm.pnts <- tot_sprops_w.pnts$site_key [%in%](https://rdrr.io/r/base/match.html) rem.sp
[unlink](https://rdrr.io/r/base/unlink.html)("./out/gpkg/sol_hydro.pnts_horizons.gpkg")
writeOGR(tot_sprops_w.pnts[!sel.rm.pnts,], "./out/gpkg/sol_hydro.pnts_horizons.gpkg", "sol_hydro.pnts_horizons", drive="GPKG")
```
#### 6\.4\.0\.6 Ksat dataset
```
## assumed selection (original expression not shown): rows with at least one Ksat value
sel.compl = !is.na(tot_sprops$ksat_lab) | !is.na(tot_sprops$ksat_field)
[summary](https://rdrr.io/r/base/summary.html)(sel.compl)
#> Mode FALSE TRUE
#> logical 131042 24752
## complete ksat_field points
tot_sprops.pnts.C = tot_sprops[sel.compl & !tot_sprops$source_db == "SIMULATED",]
sum.NA = [sapply](https://rdrr.io/r/base/lapply.html)(tot_sprops.pnts.C, function(i){[sum](https://rdrr.io/r/base/sum.html)([is.na](https://rdrr.io/r/base/NA.html)(i))})
tot_sprops.pnts.C = tot_sprops.pnts.C[,!sum.NA==[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts.C)]
tot_sprops.pnts.C$w10cld = NULL
tot_sprops.pnts.C$w6clod = NULL
tot_sprops.pnts.C$w15bfm = NULL
tot_sprops.pnts.C$site_obsdate = NULL
tot_sprops.pnts.C$confidence_degree = NULL
tot_sprops.pnts.C$cec_sum = NULL
tot_sprops.pnts.C$wpg2 = NULL
tot_sprops.pnts.C$uuid = NULL
[dim](https://rdrr.io/r/base/dim.html)(tot_sprops.pnts.C)
#> [1] 13258 25
```
#### 6\.4\.0\.7 RDS files
Plot in Goode Homolosine projection and save final objects:
```
if(!file.exists("./img/sol_hydro.pnts_sites.png")){ ## assumed guard: skip if the figure already exists
tot_sprops.pnts_sf <- st_as_sf(tot_sprops.pnts[1])
plot_gh(tot_sprops.pnts_sf, out.pdf="./img/sol_hydro.pnts_sites.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_hydro.pnts_sites.pdf ./img/sol_hydro.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_hydro.pnts_sites.png ./img/sol_hydro.pnts_sites.png")
}
```
Fig. 1: Soil profiles and soil samples with physical and hydraulic soil properties (global compilation).
```
if(!file.exists("./img/sol_ksat.pnts_sites.png")){ ## assumed guard: skip if the figure already exists
sel.ks = [which](https://rdrr.io/r/base/which.html)(tot_sprops.pnts$location_id [%in%](https://rdrr.io/r/base/match.html) tot_sprops.pnts.C$location_id)
tot_spropsC.pnts_sf <- st_as_sf(tot_sprops.pnts[sel.ks, 1])
plot_gh(tot_spropsC.pnts_sf, out.pdf="./img/sol_ksat.pnts_sites.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_ksat.pnts_sites.pdf ./img/sol_ksat.pnts_sites -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_ksat.pnts_sites.png ./img/sol_ksat.pnts_sites.png")
}
```
Fig. 2: Soil profiles and soil samples with Ksat measurements (global compilation).
6\.5 Overlay www.OpenLandMap.org layers
---------------------------------------
Load the tiling system (1 degree grid representing global land mask) and run spatial overlay in parallel:
```
if(!exists("ov.sol")){ ## assumed run-once guard
tile.pol = readOGR("./tiles/global_tiling_100km_grid.gpkg")
#length(tile.pol)
ov.sol <- extract.tiled(obj=tot_sprops.pnts, tile.pol=tile.pol, path="/data/tt/LandGIS/grid250m", ID="ID", cpus=64)
## Valid predictors:
pr.vars = [unique](https://Rdatatable.gitlab.io/data.table/reference/duplicated.html)([unlist](https://rdrr.io/r/base/unlist.html)([sapply](https://rdrr.io/r/base/lapply.html)([c](https://rdrr.io/r/base/c.html)("fapar", "landsat", "lc100", "mod09a1", "mod11a2", "alos.palsar", "sm2rain", "irradiation_solar.atlas", "usgs.ecotapestry", "floodmap.500y", "bioclim", "water.table.depth_deltares", "snow.prob_esacci", "water.vapor_nasa.eo", "wind.speed_terraclimate", "merit.dem_m", "merit.hydro_m", "cloud.fraction_earthenv", "water.occurance_jrc", "wetlands.cw_upmc", "pb2002"), function(i){[names](https://rdrr.io/r/base/names.html)(ov.sol)[[grep](https://rdrr.io/r/base/grep.html)(i, [names](https://rdrr.io/r/base/names.html)(ov.sol))]})))
[str](https://rdrr.io/r/utils/str.html)(pr.vars)
## 349
#saveRDS.gz(ov.sol, "/mnt/diskstation/data/Soil_points/ov.sol_hydro.pnts_horizons.rds")
#ov.sol <- readRDS.gz("/mnt/diskstation/data/Soil_points/ov.sol_hydro.pnts_horizons.rds")
## Final regression matrix:
rm.sol = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(tot_sprops, ov.sol[,[c](https://rdrr.io/r/base/c.html)("olc_id", pr.vars)])
## check that there are no duplicates
[sum](https://rdrr.io/r/base/sum.html)([duplicated](https://Rdatatable.gitlab.io/data.table/reference/duplicated.html)(rm.sol$uuid))
## assumed subset (original expression not shown): rows with a Ksat value
rm.ksat = rm.sol[!is.na(rm.sol$ksat_lab) | !is.na(rm.sol$ksat_field),]
}
[dim](https://rdrr.io/r/base/dim.html)(rm.sol)
```
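`extract.tiled()` is a project\-specific helper that splits the overlay across the 100 km tiles and runs it on many cores; per tile, the core operation is an ordinary point\-on\-raster overlay. A minimal single\-layer sketch of that core step (the covariate file name below is only a placeholder):
```
library(terra)
## placeholder covariate layer; any 250 m GeoTIFF from the stack would do
r <- rast("/data/tt/LandGIS/grid250m/some_covariate_250m.tif")
pnts <- vect(tot_sprops.pnts)   ## sp object -> terra SpatVector
ov <- extract(r, pnts)          ## one row per point: ID + covariate value(s)
head(ov)
```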
Save final analysis\-ready objects:
```
saveRDS.gz(tot_sprops, "./out/rds/sol_hydro.pnts_horizons.rds")
saveRDS.gz(tot_sprops.pnts, "/mnt/diskstation/data/Soil_points/sol_hydro.pnts_sites.rds")
## reorder columns
tot_sprops.pnts.C$ID = 1:[nrow](https://rdrr.io/r/base/nrow.html)(tot_sprops.pnts.C)
tot_sprops.pnts.C = tot_sprops.pnts.C[,c(which(names(tot_sprops.pnts.C)=="ID"), which(!names(tot_sprops.pnts.C)=="ID"))]
saveRDS.gz(tot_sprops.pnts.C, "./out/rds/sol_ksat.pnts_horizons.rds")
#library(farff)
#writeARFF(tot_sprops, "./out/arff/sol_hydro.pnts_horizons.arff", overwrite = TRUE)
#writeARFF(tot_sprops.pnts.C, "./out/arff/sol_ksat.pnts_horizons.arff", overwrite = TRUE)
## compressed CSV
[write.csv](https://rdrr.io/r/utils/write.table.html)(tot_sprops, file=[gzfile](https://rdrr.io/r/base/connections.html)("./out/csv/sol_hydro.pnts_horizons.csv.gz"))
[write.csv](https://rdrr.io/r/utils/write.table.html)(tot_sprops.pnts.C, file=[gzfile](https://rdrr.io/r/base/connections.html)("./out/csv/sol_ksat.pnts_horizons.csv.gz"), row.names = FALSE)
saveRDS.gz(rm.sol, "./out/rds/sol_hydro.pnts_horizons_rm.rds")
saveRDS.gz(rm.ksat, "./out/rds/sol_ksat.pnts_horizons_rm.rds")
```
Save temp object:
```
#rm(rm.sol); gc()
save.image.pigz(file="soilhydro.RData")
## rmarkdown::render("Index.rmd")
```
7 WRB soil types
================
You are reading the work\-in\-progress *An Open Compendium of Soil Sample and Soil Profile Datasets*. This chapter is currently a draft version; a peer\-reviewed publication is pending. You can find the polished first edition at <https://opengeohub.github.io/SoilSamples/>.
7\.1 Overview
-------------
This section describes the import steps used to produce a global compilation of
FAO / IUSS [World Reference Base (WRB)](https://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/) observations of soil types.
Classes are either used as\-is or translated from a local or international system.
[Correlation tables](https://github.com/OpenGeoHub/SoilSamples/tree/main/correlation) are available in the folder `./correlation` and are based on
various literature sources. Correlation is not trivial and often not 1:1, so we
typically use 2–3 options for translating soil types ([Krasilnikov, Arnold, Marti, \& Shoba, 2009](references.html#ref-krasilnikov2009handbook)). Output compilations are
available in the [folder](https://github.com/OpenGeoHub/SoilSamples/tree/main/out) `./out`.
Please refer to the dataset version / DOI to ensure full reproducibility of your modeling.
This dataset is currently used to produce [soil type maps of the world](https://github.com/OpenGeoHub/SoilTypeMapping/)
at various spatial resolutions. To add a new dataset, please open a [new issue](https://github.com/OpenGeoHub/SoilSamples/issues) or submit a merge request.
7\.2 WoSIS point datasets
-------------------------
There are currently several global datasets that reference the distribution of WRB
classes. For building training points for predictive soil type mapping, the most
comprehensive dataset seems to be the World Soil Information Service (WoSIS) soil profile database (available via:
<https://www.isric.org/explore/wosis>) ([Batjes, Ribeiro, \& Van Oostrum, 2020](references.html#ref-batjes2020standardised)).
You can download the most up\-to\-date snapshot of the WoSIS Geopackage file directly
by using the [ISRIC’s Web Feature Service](https://www.isric.org/explore/wosis/accessing-wosis-derived-datasets#Access_data).
Below is an example of a snapshot downloaded on 10\-April\-2023:
```
wosis = sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)("/mnt/landmark/HWSDv2/wosis_latest_profiles.gpkg")
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis$cwrb_version))
```
The WRB soil types can be generated by combining multiple columns:
```
wosis$wrb4 = [paste0](https://rdrr.io/r/base/paste.html)([ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wosis$cwrb_prefix_qualifier), "", [paste0](https://rdrr.io/r/base/paste.html)(wosis$cwrb_prefix_qualifier, " ")), wosis$cwrb_reference_soil_group, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wosis$cwrb_suffix_qualifier), "", [paste0](https://rdrr.io/r/base/paste.html)(" ", wosis$cwrb_suffix_qualifier)))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis$wrb4), maxsum=20)
```
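The `ifelse()` wrappers ensure that a missing prefix or suffix qualifier contributes an empty string instead of the literal text `"NA"`. A toy illustration of the same construction with made\-up qualifier values:
```
prefix <- c("Haplic", NA)
rsg    <- c("Acrisols", "Ferralsols")
suffix <- c(NA, "Geric")
paste0(ifelse(is.na(prefix), "", paste0(prefix, " ")),
       rsg,
       ifelse(is.na(suffix), "", paste0(" ", suffix)))
## -> "Haplic Acrisols" "Ferralsols Geric"
```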
This finally gives about 30,000 soil points with soil classification:
```
wosis.wrb = wosis[!wosis$wrb4=="NA", [c](https://rdrr.io/r/base/c.html)("profile_id", "geom_accuracy", "latitude", "longitude", "wrb4", "dataset_id")]
[str](https://rdrr.io/r/utils/str.html)(wosis.wrb)
#plot(wosis.wrb[,c("longitude","latitude")])
```
We can write the [summary distribution](https://github.com/OpenGeoHub/SoilSamples/blob/main/correlation/wosis.wrb_summary.csv) of soil types by using:
```
xs0 = [summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis.wrb$wrb4), maxsum = [length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis.wrb$wrb4))))
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(WRB=[attr](https://rdrr.io/r/base/attr.html)(xs0, "names"), count=xs0), "./correlation/wosis.wrb_summary.csv")
```
After some clean\-up, we can prepare a harmonized legend that has consistent
WRB soil types up to the level of great\-group \+ prefix, keeping only soil points
with a location accuracy no worse than 1 km:
```
h.wosis = [read.csv](https://rdrr.io/r/utils/read.table.html)("./correlation/WOSIS_WRB_legend.csv")
h.wosis$wrb4 = h.wosis$WRB
wosis.wrb$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)([as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb["wrb4"]), h.wosis)$h_wrb4
## copy values
wosis.wrb2 = wosis.wrb
wosis.wrb2$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)([as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb2["wrb4"]), h.wosis)$h_wrb4.2
#summary(as.factor(wosis.wrb$h_wrb4))
#str(levels(as.factor(wosis.wrb$h_wrb4)))
## remove points with poor location accuracy
h.wosis.wrb = [rbind](https://Rdatatable.gitlab.io/data.table/reference/rbindlist.html)(
[as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb[!([is.na](https://rdrr.io/r/base/NA.html)(wosis.wrb$h_wrb4)|wosis.wrb$h_wrb4==""|wosis.wrb$geom_accuracy>0.08334),]),
[as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb2[!([is.na](https://rdrr.io/r/base/NA.html)(wosis.wrb2$h_wrb4)|wosis.wrb2$h_wrb4==""|wosis.wrb2$geom_accuracy>0.08334),]))
h.wosis.wrb$source_db = h.wosis.wrb$dataset_id
[dim](https://rdrr.io/r/base/dim.html)(h.wosis.wrb)
saveRDS.gz(h.wosis.wrb, "./correlation/h.wosis.wrb.rds")
```
which can now be considered an analysis\-ready point dataset, with `h_wrb4` being the
harmonized value of the soil type:
```
h.wosis.wrb = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/h.wosis.wrb.rds")
[head](https://rdrr.io/r/utils/head.html)(h.wosis.wrb)
#> profile_id geom_accuracy latitude longitude wrb4
#> 1 47431 0.000100 6.5875 2.1525 Planosol
#> 2 47478 0.010000 6.5875 2.1900 Planosol
#> 3 47503 0.000100 6.5875 2.2262 Planosol
#> 4 47596 0.000100 6.8733 2.3458 Acrisol
#> 5 52678 0.000278 1.0700 34.9000 Vertisol
#> 6 52686 0.000278 0.9400 34.9500 Ferralsol
#> dataset_id geom h_wrb4
#> 1 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.1525 6.5875) Eutric Planosols
#> 2 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.19 6.5875) Eutric Planosols
#> 3 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.2262 6.5875) Eutric Planosols
#> 4 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.3458 6.8733) Haplic Acrisols
#> 5 {AF-AfSP,KE-SOTER,WD-WISE} POINT (34.9 1.07) Haplic Vertisols
#> 6 {AF-AfSP,KE-SOTER,WD-WISE} POINT (34.95 0.94) Geric Ferralsols
#> source_db
#> 1 {AF-AfSP,BJSOTER,WD-WISE}
#> 2 {AF-AfSP,BJSOTER,WD-WISE}
#> 3 {AF-AfSP,BJSOTER,WD-WISE}
#> 4 {AF-AfSP,BJSOTER,WD-WISE}
#> 5 {AF-AfSP,KE-SOTER,WD-WISE}
#> 6 {AF-AfSP,KE-SOTER,WD-WISE}
```
7\.3 HWSDv2
-----------
A disadvantage of using only legacy soil profiles, however, is that these are often
spatially clustered, i.e. large gaps exist where almost no data is available. This applies especially
to the African and Asian continents. To increase the spatial coverage of the training points for
[global soil type mapping](https://github.com/OpenGeoHub/SoilTypeMapping), we can add points generated from a global soil polygon map,
e.g. the [Harmonized World Soil Database](https://iiasa.ac.at/models-tools-data/hwsd) (HWSD) ([FAO \& IIASA, 2023](references.html#ref-hwsd2023)).
This can be done in three steps: first, we prepare a [summary of all WRB soil types](https://github.com/OpenGeoHub/SoilSamples/blob/main/correlation/hwsd2_summary.csv) in the HWSDv2:
```
hwsd <- [mdb.get](https://rdrr.io/pkg/Hmisc/man/mdb.get.html)("/mnt/landmark/HWSDv2/HWSD2.mdb")
#str(hwsd)
wrb.leg = hwsd$D_WRB4
#str(wrb.leg)
wrb.leg$WRB4 = wrb.leg$CODE
layer = hwsd$HWSD2_LAYERS[,[c](https://rdrr.io/r/base/c.html)("HWSD2.SMU.ID", "WRB4")]
layer$VALUE = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(layer, wrb.leg[[c](https://rdrr.io/r/base/c.html)("VALUE","WRB4")], match="first")$VALUE
xs = [summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(layer$VALUE), maxsum = [nrow](https://rdrr.io/r/base/nrow.html)(wrb.leg))
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(WRB4=[attr](https://rdrr.io/r/base/attr.html)(xs, "names"), count=xs), "./correlation/hwsd2_summary.csv")
```
The `./correlation/hwsd2_summary.csv` table now shows the most frequent soil types
for the world based on the HWSDv2\. Note that these are primarily expert\-based, of
unknown uncertainty / confidence, and should be used with caution, unlike WoSIS and
other legacy soil profiles, which are based on actual soil observations and fieldwork.
Second, we can prepare a raster layer from which we can randomly draw training points
using some probability sampling, e.g. [Simple Random Sampling](https://opengeohub.github.io/spatial-sampling-ml/).
Because we want to draw samples from an equal\-area space, a good idea is to convert the
original HWSD raster to the [IGH projection](https://epsg.io/54052):
```
rgdal::[GDALinfo](http://rgdal.r-forge.r-project.org/reference/readGDAL.html)("/mnt/landmark/HWSDv2/HWSD2.bil")
## 1km resolution
te = [c](https://rdrr.io/r/base/c.html)(-20037508,-6728980, 20037508, 8421750)
gh.prj = "+proj=igh +ellps=WGS84 +units=m +no_defs"
[system](https://rdrr.io/r/base/system.html)([paste0](https://rdrr.io/r/base/paste.html)('gdalwarp /mnt/landmark/HWSDv2/HWSD2.bil /mnt/landmark/HWSDv2/HWSD2_gh_1km.tif -r \"near\"
--config CHECK_WITH_INVERT_PROJ TRUE -t_srs \"', gh.prj,
'\" -co \"COMPRESS=DEFLATE\" -tr 1000 1000 -overwrite -te ',
[paste](https://rdrr.io/r/base/paste.html)(te, collapse = " ")))
```
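As an alternative to calling `gdalwarp` from the command line, roughly the same reprojection can be sketched with terra; this is not the exact command used above, and nearest\-neighbour resampling is essential because the cell values are mapping\-unit IDs:
```
library(terra)
hwsd2 <- rast("/mnt/landmark/HWSDv2/HWSD2.bil")
gh.prj <- "+proj=igh +ellps=WGS84 +units=m +no_defs"
## reproject to the equal-area IGH grid at 1 km, keeping class values intact
hwsd2.gh <- project(hwsd2, gh.prj, method = "near", res = 1000)
writeRaster(hwsd2.gh, "/mnt/landmark/HWSDv2/HWSD2_gh_1km.tif",
            gdal = c("COMPRESS=DEFLATE"), overwrite = TRUE)
```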
Now we can sample random points from the projected raster by using the terra package functionality ([Hijmans, 2019](references.html#ref-hijmans2019spatial)).
We generate 100,000 random points, although in principle the points can later be
subset to far fewer as needed.
```
rnd.hwsd = terra::[spatSample](https://rdrr.io/pkg/terra/man/sample.html)(terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/landmark/HWSDv2/HWSD2_gh_1km.tif"),
size=1e5, method="random", na.rm=TRUE, xy=TRUE)
## merge and make WRB4 layer
[names](https://rdrr.io/r/base/names.html)(rnd.hwsd)[3] = "HWSD2.SMU.ID"
rnd.hwsd$WRB4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rnd.hwsd, layer[[c](https://rdrr.io/r/base/c.html)("HWSD2.SMU.ID","VALUE")], match="first")$VALUE
rnd.sf = sf::[st_as_sf](https://r-spatial.github.io/sf/reference/st_as_sf.html)(rnd.hwsd, coords = [c](https://rdrr.io/r/base/c.html)(1,2), crs=gh.prj)
rnd.ll <- rnd.sf [%>%](https://magrittr.tidyverse.org/reference/pipe.html) sf::[st_transform](https://r-spatial.github.io/sf/reference/st_transform.html)(4326)
[unlink](https://rdrr.io/r/base/unlink.html)("./correlation/sim_hwsdv2_pnts.gpkg")
sf::[write_sf](https://r-spatial.github.io/sf/reference/st_write.html)(rnd.ll, "./correlation/sim_hwsdv2_pnts.gpkg", driver = "GPKG")
[saveRDS](https://rdrr.io/r/base/readRDS.html)(rnd.ll, "./correlation/sim_hwsdv2_pnts.rds")
```
We can further harmonize values using a correlation legend:
```
rnd.ll = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/sim_hwsdv2_pnts.rds")
h.hwsd = [read.csv](https://rdrr.io/r/utils/read.table.html)("./correlation/HWSDv2_WRB_legend.csv")
hwsd.wrb = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)([cbind](https://rdrr.io/r/base/cbind.html)(rnd.ll, sf::[st_coordinates](https://r-spatial.github.io/sf/reference/st_coordinates.html)(rnd.ll)))
hwsd.wrb = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(hwsd.wrb, [c](https://rdrr.io/r/base/c.html)("X"="longitude", "Y"="latitude"))
hwsd.wrb$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hwsd.wrb, h.hwsd)$h_wrb4
#> Joining by: WRB4
hwsd.wrb$source_db = "HWSDv2"
hwsd.wrb$profile_id = [paste0](https://rdrr.io/r/base/paste.html)("SIM", 1:[nrow](https://rdrr.io/r/base/nrow.html)(hwsd.wrb))
## subset to complete points
hwsd.wrb = hwsd.wrb[!is.na(hwsd.wrb$WRB4),]  ## assumed: keep points with a mapped WRB4 class
```
which now also has a harmonized column `h_wrb4`, compatible with the column produced
using the WoSIS points:
```
[head](https://rdrr.io/r/utils/head.html)(hwsd.wrb)
#> HWSD2.SMU.ID WRB4 longitude latitude
#> 1 1467 Ferric Lixisols 8.290244 13.93512
#> 2 11321 Anthrosols 82.118452 41.73180
#> 3 26612 Calcaric Cambisols 33.212770 15.03106
#> 4 16664 Eutric Cambisols 38.366071 10.60237
#> 5 3387 Cambic Cryosols -64.448944 55.92456
#> 6 11805 Haplic Acrisols 118.358546 25.82881
#> geometry h_wrb4 source_db profile_id
#> 1 POINT (8.290244 13.93512) Ferric Lixisols HWSDv2 SIM1
#> 2 POINT (82.11845 41.7318) <NA> HWSDv2 SIM2
#> 3 POINT (33.21277 15.03106) Calcaric Cambisols HWSDv2 SIM3
#> 4 POINT (38.36607 10.60237) Eutric Cambisols HWSDv2 SIM4
#> 5 POINT (-64.44894 55.92456) Cambic Cryosols HWSDv2 SIM5
#> 6 POINT (118.3585 25.82881) Haplic Acrisols HWSDv2 SIM6
```
So, in summary, we have prepared two point datasets from WoSIS and HWSDv2\.
WoSIS contains actual observations of soil types and should be considered ground\-truth.
The HWSDv2 contains the WRB 2022 version soil types per mapping unit, but these are
of potentially variable accuracy and should only be used to fill gaps in the training data.
We had to manually harmonize some classes that were either outdated (old WRB versions)
or missing a prefix / suffix.
There are many more point datasets with WRB classification, or a compatible classification,
that could be added to the list of training points. Below we list the most recent
datasets with soil types that we import and add to WoSIS to produce the most up\-to\-date
compilation of soil training data.
7\.4 Additional point datasets with WRB classes
-----------------------------------------------
In addition to WoSIS and HWSDv2, we can also add some point datasets
that are not yet included in any global compilation but potentially contain *ground\-truth*
observations of soil types.
For example, we can use point observations coming from land\-surface monitoring,
e.g. to represent shifting\-sand and bare\-rock areas. Global land cover validation
datasets produced by photo\-interpretation of very high resolution satellite imagery
(e.g. 20 cm spatial resolution) often contain useful observations of shifting sand,
permanent ice and bare rock ([Tsendbazar et al., 2021](references.html#ref-tsendbazar2021towards)). These specific surface materials
are often missing or systematically under\-represented in legacy soil profile datasets.
Here is an example of almost 6000 observations of bare rock and shifting sands:
```
lc.xy = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/photointerpretation_leptosol.rds")
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(lc.xy$h_wrb4))
#> Lithic Leptosols Shifting sands
#> 3928 2156
```
Note that we define `Shifting sands` as an additional category of soil type, as these
areas are otherwise arbitrarily either masked out or (controversially) classified to different soil types.
We consider it more consistent to map shifting sands as a separate category of land.
Another dataset interesting for soil type mapping is the set of legacy soil observations
focused on tropical peatlands, published in the literature and then digitized manually ([Gumbricht et al., 2017](references.html#ref-gumbricht2017expert); [Hengl, 2016](references.html#ref-hengl2016global)):
```
peat.xy = [read.csv](https://rdrr.io/r/utils/read.table.html)("/mnt/diskstation/data/Soil_points/INT/CIFOR_peatlands/SOC_literature_CIFOR_v1.csv", na.strings = [c](https://rdrr.io/r/base/c.html)("", "NA"))
peat.xy = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(peat.xy, [c](https://rdrr.io/r/base/c.html)("TAXNWRB"="h_wrb4", "modelling.x"="longitude", "modelling.y"="latitude", "SOURCEID"="profile_id"))
peat.xy$source_db = "CIFOR_peatlands"
peat.xy = peat.xy[!is.na(peat.xy$h_wrb4),]  ## assumed: keep rows with a WRB class
```
Peatlands and inaccessible tropical soil types are often under\-represented, hence this
dataset helps reduce the bias against tropical soils.
The Croatian soil profile database also contains soil types, still in the FAO
classification system ([Antonić, Pernar, \& Jelaska, 2003](references.html#ref-antonic2003spatial)):
```
hr.xy = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/hrspdb_pnts.rds")
hr.xy2 = hr.xy[,-[which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(hr.xy) == "h_wrb4")]
hr.xy2 = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(hr.xy2, [c](https://rdrr.io/r/base/c.html)("h_wrb4.2"="h_wrb4"))
```
Legacy soil observations for Italy ([Righini, Costantini, \& Sulli, 2001](references.html#ref-righini2001banca); [Vecchio, Barberis, \& Bourlot, 2002](references.html#ref-vecchio2002regional)):
```
it.xy = [readRDS](https://rdrr.io/r/base/readRDS.html)("/mnt/landmark/HWSDv2/taxa/WRB_points_Italy.rds")
it.xy = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(it.xy, [c](https://rdrr.io/r/base/c.html)("wrb2015"="h_wrb4", "id_site"="profile_id", "lat"="latitude", "long"="longitude"))
it.xy$source_db = "Italian_SPDB"
```
German agricultural soil inventory dataset ([Poeplau et al., 2020](references.html#ref-poeplau2020stocks)):
```
de.xy = [readRDS](https://rdrr.io/r/base/readRDS.html)("/mnt/landmark/HWSDv2/taxa/WRB_points_Germany.rds")
de.xy = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(de.xy, [c](https://rdrr.io/r/base/c.html)("WRB_correlation_1"="h_wrb4", "PointID"="profile_id", "lat"="latitude", "lon"="longitude"))
de.xy$Specific.soil.subtype = NULL
de.xy$source_db = "BZE_LW"
de.xy2 = de.xy[,-[which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(de.xy) == "h_wrb4")]
de.xy2 = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(de.xy2, [c](https://rdrr.io/r/base/c.html)("WRB_correlation_2"="h_wrb4"))
```
French RMQS soil profile and monitoring dataset ([Saby et al., 2020](references.html#ref-AIQ9WS_2020)):
```
fr.xy = [readRDS](https://rdrr.io/r/base/readRDS.html)("/mnt/landmark/HWSDv2/taxa/WRB_points_France.rds")
fr.xy = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(fr.xy, [c](https://rdrr.io/r/base/c.html)("WRB_correlation_1"="h_wrb4", "id_site"="profile_id", "lat"="latitude", "lon"="longitude"))
fr.xy$signific_ger_95 = NULL
fr.xy$source_db = "RMQS1"
fr.xy2 = fr.xy[,-[which](https://rdrr.io/r/base/which.html)([names](https://rdrr.io/r/base/names.html)(fr.xy) == "h_wrb4")]
fr.xy2 = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(fr.xy2, [c](https://rdrr.io/r/base/c.html)("WRB_correlation_2"="h_wrb4"))
```
7\.5 Final compilation of analysis\-ready points
------------------------------------------------
Multiple imported point datasets can finally be combined into a single, globally
consistent, analysis\-ready training dataset (here we limit simulated points to a maximum of 20,000):
```
sel.cn = [c](https://rdrr.io/r/base/c.html)("profile_id","latitude", "longitude", "h_wrb4", "source_db")
tr.pnts = plyr::[rbind.fill](https://rdrr.io/pkg/plyr/man/rbind.fill.html)([list](https://rdrr.io/r/base/list.html)(h.wosis.wrb[, sel.cn],
hwsd.wrb[[sample.int](https://rdrr.io/r/base/sample.html)(n = [nrow](https://rdrr.io/r/base/nrow.html)(hwsd.wrb), size = 2e4), sel.cn],
lc.xy, hr.xy2[, sel.cn],
peat.xy[, sel.cn],
hr.xy[, sel.cn],
it.xy[, sel.cn], de.xy[, sel.cn], de.xy2[, sel.cn],
fr.xy[, sel.cn], fr.xy2[, sel.cn]))
```
We can clean up some systematic naming issues common in many soil DBs:
```
tr.pnts$h_wrb4 = [gsub](https://rdrr.io/r/base/grep.html)(",", " ", tr.pnts$h_wrb4)
tr.pnts$h_wrb4 = [gsub](https://rdrr.io/r/base/grep.html)("distric", "dystric", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = [gsub](https://rdrr.io/r/base/grep.html)("sol ", "sols ", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = [gsub](https://rdrr.io/r/base/grep.html)("sol$", "sols", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)(tr.pnts$h_wrb4=="", NA, tr.pnts$h_wrb4)
## subset to complete points:
tr.pnts = tr.pnts[!is.na(tr.pnts$h_wrb4) & !is.na(tr.pnts$latitude),]  ## assumed completeness criteria
```
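The regular expressions mostly repair plural / singular and spelling variants of the class names. A quick check on a few made\-up raw strings shows the intended effect (note that `gsub()` inserts the replacement text verbatim, hence the lower\-case “dystric”):
```
x <- c("Distric Cambisol", "Haplic Luvisol over limestone", "Ferric Acrisol")
x <- gsub("distric", "dystric", x, ignore.case = TRUE)
x <- gsub("sol ", "sols ", x, ignore.case = TRUE)
x <- gsub("sol$", "sols", x, ignore.case = TRUE)
x
## -> "dystric Cambisols" "Haplic Luvisols over limestone" "Ferric Acrisols"
```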
This gives a total of about 70,000 training points with a WRB soil type. Note that correlation
between different national systems is not trivial, hence you should always check
the [folder](https://github.com/OpenGeoHub/SoilSamples/tree/main/correlation) `./correlation` to see if some soil types are possibly incorrectly
correlated. Also, while IUSS and FAO kindly maintain the [World Reference Base (WRB)](https://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/),
correlating the various versions of WRB is [often not trivial](https://docs.google.com/spreadsheets/d/1GaNpiH65yiuHusNVkUrKog2FCiVUO6kz_wNdIzHbdfg/edit#gid=1418992436) and we have to
somewhat improvise (or ignore the issue) where soil types have changed over time.
```
wrb.pnts_profiles_sf <- sf::[st_as_sf](https://r-spatial.github.io/sf/reference/st_as_sf.html)(tr.pnts, coords = [c](https://rdrr.io/r/base/c.html)("longitude","latitude"), crs="EPSG:4326")
if(!file.exists("./img/sol_wrb.pnts_profiles.png")){ ## assumed guard: skip if the figure already exists
plot_gh(wrb.pnts_profiles_sf, out.pdf="./img/sol_wrb.pnts_profiles.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_wrb.pnts_profiles.pdf ./img/sol_wrb.pnts_profiles -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_wrb.pnts_profiles.png ./img/sol_wrb.pnts_profiles.png")
}
```
A global compilation of soil profiles with WRB soil type classification.
```
wrb.pnts_profilesL_sf <- sf::[st_as_sf](https://r-spatial.github.io/sf/reference/st_as_sf.html)(tr.pnts[!(tr.pnts$source_db [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("HWSDv2", "GlobalLCV")),], coords = [c](https://rdrr.io/r/base/c.html)("longitude","latitude"), crs="EPSG:4326")
if(!file.exists("./img/sol_wrb.pnts_tot.profiles.png")){ ## assumed guard: skip if the figure already exists
plot_gh(wrb.pnts_profilesL_sf, out.pdf="./img/sol_wrb.pnts_tot.profiles.pdf")
[system](https://rdrr.io/r/base/system.html)("pdftoppm ./img/sol_wrb.pnts_tot.profiles.pdf ./img/sol_wrb.pnts_tot.profiles -png -f 1 -singlefile")
[system](https://rdrr.io/r/base/system.html)("convert -crop 1280x575+36+114 ./img/sol_wrb.pnts_tot.profiles.png ./img/sol_wrb.pnts_tot.profiles.png")
}
```
A global compilation of soil profiles with WRB soil type classification: a subset with actual soil profiles.
We can export the final training points to a GeoPackage file by using e.g.:
```
sf::[write_sf](https://r-spatial.github.io/sf/reference/st_write.html)(sf::[st_as_sf](https://r-spatial.github.io/sf/reference/st_as_sf.html)(tr.pnts[!tr.pnts$source_db=="Italian_SPDB",],
coords = [c](https://rdrr.io/r/base/c.html)("longitude","latitude"), crs="EPSG:4326"),
[paste0](https://rdrr.io/r/base/paste.html)("./out/gpkg/sol_wrb.pnts_profiles.gpkg"), driver = "GPKG")
[saveRDS](https://rdrr.io/r/base/readRDS.html)(tr.pnts[!tr.pnts$source_db=="Italian_SPDB",], "./out/rds/sol_wrb.pnts_profiles.rds")
#save.image.pigz(file="soilwrb.RData")
```
7\.1 Overview
-------------
This section describes import steps used to produce a global compilation of
FAO’s IUSS’s [World Reference Base (WRB)](https://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/) observations of soil types.
Classes are either used as\-is or a translated from some local or international system.
[Correlation tables](https://github.com/OpenGeoHub/SoilSamples/tree/main/correlation) are available in the folder `./correlation` and is based on
various literature sources. Correlation is not trivial and often not 1:1 so we
typically use 2–3 options for translating soil types ([Krasilnikov, Arnold, Marti, \& Shoba, 2009](references.html#ref-krasilnikov2009handbook)). Output compilations are
available in the [folder](https://github.com/OpenGeoHub/SoilSamples/tree/main/out) `./out`.
Please refer to the dataset version / DOI to ensure full reproducibility of your modeling.
This dataset is currently used to produce [soil type maps of the world](https://github.com/OpenGeoHub/SoilTypeMapping/)
at various spatial resolutions. To add new dataset please open a [new issue](https://github.com/OpenGeoHub/SoilSamples/issues) or do a merge request.
7\.2 WoSIS point datasets
-------------------------
There are currently several global datasets that reference distribution of WRB
classes. For building training points for predictive soil type mapping the most
comprehensive dataset seems to be World Soil Information Service (WoSIS) soil profile database (available via:
<https://www.isric.org/explore/wosis>) ([Batjes, Ribeiro, \& Van Oostrum, 2020](references.html#ref-batjes2020standardised)).
You can download the most up\-to\-date snapshot of the most up\-to\-date WOSIS Geopackage file directly
by using the [ISRIC’s Web Feature Service](https://www.isric.org/explore/wosis/accessing-wosis-derived-datasets#Access_data).
Below is an example of a snapshot downloaded on 10\-April\-2023:
```
wosis = sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)("/mnt/landmark/HWSDv2/wosis_latest_profiles.gpkg")
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis$cwrb_version))
```
The WRB soil types can be generate by combining multiple columns:
```
wosis$wrb4 = [paste0](https://rdrr.io/r/base/paste.html)([ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wosis$cwrb_prefix_qualifier), "", [paste0](https://rdrr.io/r/base/paste.html)(wosis$cwrb_prefix_qualifier, " ")), wosis$cwrb_reference_soil_group, [ifelse](https://Rdatatable.gitlab.io/data.table/reference/fifelse.html)([is.na](https://rdrr.io/r/base/NA.html)(wosis$cwrb_suffix_qualifier), "", [paste0](https://rdrr.io/r/base/paste.html)(" ", wosis$cwrb_suffix_qualifier)))
[summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis$wrb4), maxsum=20)
```
This finally gives about 30,000 soil points with soil classification:
```
wosis.wrb = wosis[!wosis$wrb4=="NA", [c](https://rdrr.io/r/base/c.html)("profile_id", "geom_accuracy", "latitude", "longitude", "wrb4", "dataset_id")]
[str](https://rdrr.io/r/utils/str.html)(wosis.wrb)
#plot(wosis.wrb[,c("longitude","latitude")])
```
We can write the [summary distribution](https://github.com/OpenGeoHub/SoilSamples/blob/main/correlation/wosis.wrb_summary.csv) of soil types by using:
```
xs0 = [summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis.wrb$wrb4), maxsum = [length](https://rdrr.io/r/base/length.html)([levels](https://rdrr.io/r/base/levels.html)([as.factor](https://rdrr.io/r/base/factor.html)(wosis.wrb$wrb4))))
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(WRB=[attr](https://rdrr.io/r/base/attr.html)(xs0, "names"), count=xs0), "./correlation/wosis.wrb_summary.csv")
```
After some clean\-up, we can prepare a harmonized legend that now has consistently
WRB soil types up to the level of great\-group \+ prefix and with soil points
that have a location accuracy of not worse than 1 km:
```
h.wosis = [read.csv](https://rdrr.io/r/utils/read.table.html)("./correlation/WOSIS_WRB_legend.csv")
h.wosis$wrb4 = h.wosis$WRB
wosis.wrb$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)([as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb["wrb4"]), h.wosis)$h_wrb4
## copy values
wosis.wrb2 = wosis.wrb
wosis.wrb2$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)([as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb2["wrb4"]), h.wosis)$h_wrb4.2
#summary(as.factor(wosis.wrb$h_wrb4))
#str(levels(as.factor(wosis.wrb$h_wrb4)))
## remove points with poor location accuracy
h.wosis.wrb = [rbind](https://Rdatatable.gitlab.io/data.table/reference/rbindlist.html)(
[as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb[!([is.na](https://rdrr.io/r/base/NA.html)(wosis.wrb$h_wrb4)|wosis.wrb$h_wrb4==""|wosis.wrb$geom_accuracy>0.08334),]),
[as.data.frame](https://rdrr.io/r/base/as.data.frame.html)(wosis.wrb2[!([is.na](https://rdrr.io/r/base/NA.html)(wosis.wrb2$h_wrb4)|wosis.wrb2$h_wrb4==""|wosis.wrb2$geom_accuracy>0.08334),]))
h.wosis.wrb$source_db = h.wosis.wrb$dataset_id
[dim](https://rdrr.io/r/base/dim.html)(h.wosis.wrb)
saveRDS.gz(h.wosis.wrb, "./correlation/h.wosis.wrb.rds")
```
which can be now considered an analysis\-ready point dataset with `h_wrb4` being the
harmonized value of the soil type:
```
h.wosis.wrb = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/h.wosis.wrb.rds")
[head](https://rdrr.io/r/utils/head.html)(h.wosis.wrb)
#> profile_id geom_accuracy latitude longitude wrb4
#> 1 47431 0.000100 6.5875 2.1525 Planosol
#> 2 47478 0.010000 6.5875 2.1900 Planosol
#> 3 47503 0.000100 6.5875 2.2262 Planosol
#> 4 47596 0.000100 6.8733 2.3458 Acrisol
#> 5 52678 0.000278 1.0700 34.9000 Vertisol
#> 6 52686 0.000278 0.9400 34.9500 Ferralsol
#> dataset_id geom h_wrb4
#> 1 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.1525 6.5875) Eutric Planosols
#> 2 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.19 6.5875) Eutric Planosols
#> 3 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.2262 6.5875) Eutric Planosols
#> 4 {AF-AfSP,BJSOTER,WD-WISE} POINT (2.3458 6.8733) Haplic Acrisols
#> 5 {AF-AfSP,KE-SOTER,WD-WISE} POINT (34.9 1.07) Haplic Vertisols
#> 6 {AF-AfSP,KE-SOTER,WD-WISE} POINT (34.95 0.94) Geric Ferralsols
#> source_db
#> 1 {AF-AfSP,BJSOTER,WD-WISE}
#> 2 {AF-AfSP,BJSOTER,WD-WISE}
#> 3 {AF-AfSP,BJSOTER,WD-WISE}
#> 4 {AF-AfSP,BJSOTER,WD-WISE}
#> 5 {AF-AfSP,KE-SOTER,WD-WISE}
#> 6 {AF-AfSP,KE-SOTER,WD-WISE}
```
7\.3 HWSDv2
-----------
A disadvantage of using only legacy soil profiles, however, is that these are often
spatially clustered i.e. large gaps exists where almost no data available. This applies especially
for African and Asian continents. To increase spatial coverage of the training points for
[global soil type mapping](https://github.com/OpenGeoHub/SoilTypeMapping), we can add points generated from the global soil polygon map
e.g. [Harmonized World Soil Database](https://iiasa.ac.at/models-tools-data/hwsd) (HWSD) ([FAO \& IIASA, 2023](references.html#ref-hwsd2023)).
This can be done in three steps: first, we prepare [summary of all WRB soil types](https://github.com/OpenGeoHub/SoilSamples/blob/main/correlation/hwsd2_summary.csv) in the HWSDv2:
```
hwsd <- [mdb.get](https://rdrr.io/pkg/Hmisc/man/mdb.get.html)("/mnt/landmark/HWSDv2/HWSD2.mdb")
#str(hwsd)
wrb.leg = hwsd$D_WRB4
#str(wrb.leg)
wrb.leg$WRB4 = wrb.leg$CODE
layer = hwsd$HWSD2_LAYERS[,[c](https://rdrr.io/r/base/c.html)("HWSD2.SMU.ID", "WRB4")]
layer$VALUE = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(layer, wrb.leg[[c](https://rdrr.io/r/base/c.html)("VALUE","WRB4")], match="first")$VALUE
xs = [summary](https://rdrr.io/r/base/summary.html)([as.factor](https://rdrr.io/r/base/factor.html)(layer$VALUE), maxsum = [nrow](https://rdrr.io/r/base/nrow.html)(wrb.leg))
[write.csv](https://rdrr.io/r/utils/write.table.html)([data.frame](https://rdrr.io/r/base/data.frame.html)(WRB4=[attr](https://rdrr.io/r/base/attr.html)(xs, "names"), count=xs), "./correlation/hwsd2_summary.csv")
```
The `./correlation/hwsd2_summary.csv` table now shows which are the most frequent soil types
for the world based on the HWSDv2\. Note, these are primarily expert\-based and of
unknown uncertainty / confidence, so should be used with caution, unlike WoSIS and
other legacy soil profiles which are based on actual soil observations and fieldwork.
Second, we can prepare a raster layer that we can use to randomly draw training points
using some probability sampling e.g. [Simple Random Sampling](https://opengeohub.github.io/spatial-sampling-ml/).
Because we want to draw samples from an equal area space, a good idea is to convert the
original HWSD raster to the [IGH projection](https://epsg.io/54052):
```
rgdal::[GDALinfo](http://rgdal.r-forge.r-project.org/reference/readGDAL.html)("/mnt/landmark/HWSDv2/HWSD2.bil")
## 1km resolution
te = [c](https://rdrr.io/r/base/c.html)(-20037508,-6728980, 20037508, 8421750)
gh.prj = "+proj=igh +ellps=WGS84 +units=m +no_defs"
[system](https://rdrr.io/r/base/system.html)([paste0](https://rdrr.io/r/base/paste.html)('gdalwarp /mnt/landmark/HWSDv2/HWSD2.bil /mnt/landmark/HWSDv2/HWSD2_gh_1km.tif -r \"near\"
--config CHECK_WITH_INVERT_PROJ TRUE -t_srs \"', gh.prj,
'\" -co \"COMPRESS=DEFLATE\" -tr 1000 1000 -overwrite -te ',
[paste](https://rdrr.io/r/base/paste.html)(te, collapse = " ")))
```
Now we can sample random points from the projected raster by using the terra package functionality ([Hijmans, 2019](references.html#ref-hijmans2019spatial)).
We generate 100,000 random points, although in principle we can later on subset the points to much less
points as needed.
```
rnd.hwsd = terra::[spatSample](https://rdrr.io/pkg/terra/man/sample.html)(terra::[rast](https://rdrr.io/pkg/terra/man/rast.html)("/mnt/landmark/HWSDv2/HWSD2_gh_1km.tif"),
size=1e5, method="random", na.rm=TRUE, xy=TRUE)
## merge and make WRB4 layer
[names](https://rdrr.io/r/base/names.html)(rnd.hwsd)[3] = "HWSD2.SMU.ID"
rnd.hwsd$WRB4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(rnd.hwsd, layer[[c](https://rdrr.io/r/base/c.html)("HWSD2.SMU.ID","VALUE")], match="first")$VALUE
rnd.sf = sf::[st_as_sf](https://r-spatial.github.io/sf/reference/st_as_sf.html)(rnd.hwsd, coords = [c](https://rdrr.io/r/base/c.html)(1,2), crs=gh.prj)
rnd.ll <- rnd.sf [%>%](https://magrittr.tidyverse.org/reference/pipe.html) sf::[st_transform](https://r-spatial.github.io/sf/reference/st_transform.html)(4326)
[unlink](https://rdrr.io/r/base/unlink.html)("./correlation/sim_hwsdv2_pnts.gpkg")
sf::[write_sf](https://r-spatial.github.io/sf/reference/st_write.html)(rnd.ll, "./correlation/sim_hwsdv2_pnts.gpkg", driver = "GPKG")
[saveRDS](https://rdrr.io/r/base/readRDS.html)(rnd.ll, "./correlation/sim_hwsdv2_pnts.rds")
```
We can further harmonize values using a correlation legend:
```
rnd.ll = [readRDS](https://rdrr.io/r/base/readRDS.html)("./correlation/sim_hwsdv2_pnts.rds")
h.hwsd = [read.csv](https://rdrr.io/r/utils/read.table.html)("./correlation/HWSDv2_WRB_legend.csv")
hwsd.wrb = [as.data.frame](https://rdrr.io/r/base/as.data.frame.html)([cbind](https://rdrr.io/r/base/cbind.html)(rnd.ll, sf::[st_coordinates](https://r-spatial.github.io/sf/reference/st_coordinates.html)(rnd.ll)))
hwsd.wrb = plyr::[rename](https://rdrr.io/pkg/plyr/man/rename.html)(hwsd.wrb, [c](https://rdrr.io/r/base/c.html)("X"="longitude", "Y"="latitude"))
hwsd.wrb$h_wrb4 = plyr::[join](https://rdrr.io/pkg/plyr/man/join.html)(hwsd.wrb, h.hwsd)$h_wrb4
#> Joining by: WRB4
hwsd.wrb$source_db = "HWSDv2"
hwsd.wrb$profile_id = [paste0](https://rdrr.io/r/base/paste.html)("SIM", 1:[nrow](https://rdrr.io/r/base/nrow.html)(hwsd.wrb))
## subset to complete points
hwsd.wrb = hwsd.wrb[,]
```
which now also has a harmonized column `h_wrb4` compatible to the column produced
using the WoSIS points:
```
[head](https://rdrr.io/r/utils/head.html)(hwsd.wrb)
#> HWSD2.SMU.ID WRB4 longitude latitude
#> 1 1467 Ferric Lixisols 8.290244 13.93512
#> 2 11321 Anthrosols 82.118452 41.73180
#> 3 26612 Calcaric Cambisols 33.212770 15.03106
#> 4 16664 Eutric Cambisols 38.366071 10.60237
#> 5 3387 Cambic Cryosols -64.448944 55.92456
#> 6 11805 Haplic Acrisols 118.358546 25.82881
#> geometry h_wrb4 source_db profile_id
#> 1 POINT (8.290244 13.93512) Ferric Lixisols HWSDv2 SIM1
#> 2 POINT (82.11845 41.7318) <NA> HWSDv2 SIM2
#> 3 POINT (33.21277 15.03106) Calcaric Cambisols HWSDv2 SIM3
#> 4 POINT (38.36607 10.60237) Eutric Cambisols HWSDv2 SIM4
#> 5 POINT (-64.44894 55.92456) Cambic Cryosols HWSDv2 SIM5
#> 6 POINT (118.3585 25.82881) Haplic Acrisols HWSDv2 SIM6
```
In summary, we have prepared two point datasets, from WoSIS and HWSDv2\.
WoSIS contains actual observations of soil types and should be considered ground\-truth.
HWSDv2 contains the WRB 2022 soil types per mapping unit, but these are of potentially
variable accuracy and should only be used to fill gaps in the training data.
We had to manually harmonize some classes that were either outdated (old WRB versions)
or missing a prefix / suffix.
There are many more point datasets with a WRB classification, or a compatible classification,
that could be added to the list of training points. Below we list the most recent
datasets with soil types that we import and add to WoSIS to produce the most up\-to\-date
compilation of soil training data.
7\.4 Additional point datasets with WRB classes
-----------------------------------------------
In addition to WoSIS and HWSDv2, we can also add some additional point datasets
that are not yet included in any global compilation but potentially contain *ground\-truth*
observations of soil types.
For example, we can use land\-surface observations to represent shifting\-sand and bare\-rock areas.
Global land cover validation datasets produced by photo\-interpretation of very high resolution
satellite imagery (e.g. 20 cm spatial resolution) often contain useful observations of shifting sand,
permanent ice and bare rock ([Tsendbazar et al., 2021](references.html#ref-tsendbazar2021towards)). These surface materials
are often missing or systematically under\-represented in legacy soil profile datasets.
Here is an example of almost 6000 observations of bare rock and shifting sands:
```
lc.xy = readRDS("./correlation/photointerpretation_leptosol.rds")
summary(as.factor(lc.xy$h_wrb4))
#> Lithic Leptosols Shifting sands
#> 3928 2156
```
Note that we define `Shifting sands` as an additional soil\-type category, since such areas
are otherwise either arbitrarily masked out or (controversially) assigned to different soil types.
We consider it more consistent to map shifting sands as a separate category of land.
Another dataset of interest for soil type mapping is the set of legacy soil observations
focused on tropical peatlands, published in the literature and then digitized manually ([Gumbricht et al., 2017](references.html#ref-gumbricht2017expert); [Hengl, 2016](references.html#ref-hengl2016global)):
```
peat.xy = read.csv("/mnt/diskstation/data/Soil_points/INT/CIFOR_peatlands/SOC_literature_CIFOR_v1.csv", na.strings = c("", "NA"))
peat.xy = plyr::rename(peat.xy, c("TAXNWRB"="h_wrb4", "modelling.x"="longitude", "modelling.y"="latitude", "SOURCEID"="profile_id"))
peat.xy$source_db = "CIFOR_peatlands"
peat.xy = peat.xy[]
```
Peatlands and other poorly accessible tropical soils are often under\-represented in legacy
compilations, hence this dataset helps reduce that bias.
The Croatian soil profile database also contains soil types, still in the FAO
classification system ([Antonić, Pernar, \& Jelaska, 2003](references.html#ref-antonic2003spatial)):
```
hr.xy = readRDS("./correlation/hrspdb_pnts.rds")
hr.xy2 = hr.xy[,-which(names(hr.xy) == "h_wrb4")]
hr.xy2 = plyr::rename(hr.xy2, c("h_wrb4.2"="h_wrb4"))
```
Legacy soil observations for Italy ([Righini, Costantini, \& Sulli, 2001](references.html#ref-righini2001banca); [Vecchio, Barberis, \& Bourlot, 2002](references.html#ref-vecchio2002regional)):
```
it.xy = readRDS("/mnt/landmark/HWSDv2/taxa/WRB_points_Italy.rds")
it.xy = plyr::rename(it.xy, c("wrb2015"="h_wrb4", "id_site"="profile_id", "lat"="latitude", "long"="longitude"))
it.xy$source_db = "Italian_SPDB"
```
German agricultural soil inventory dataset ([Poeplau et al., 2020](references.html#ref-poeplau2020stocks)):
```
de.xy = readRDS("/mnt/landmark/HWSDv2/taxa/WRB_points_Germany.rds")
de.xy = plyr::rename(de.xy, c("WRB_correlation_1"="h_wrb4", "PointID"="profile_id", "lat"="latitude", "lon"="longitude"))
de.xy$Specific.soil.subtype = NULL
de.xy$source_db = "BZE_LW"
de.xy2 = de.xy[,-which(names(de.xy) == "h_wrb4")]
de.xy2 = plyr::rename(de.xy2, c("WRB_correlation_2"="h_wrb4"))
```
French RMQS soil profile and monitoring dataset ([Saby et al., 2020](references.html#ref-AIQ9WS_2020)):
```
fr.xy = readRDS("/mnt/landmark/HWSDv2/taxa/WRB_points_France.rds")
fr.xy = plyr::rename(fr.xy, c("WRB_correlation_1"="h_wrb4", "id_site"="profile_id", "lat"="latitude", "lon"="longitude"))
fr.xy$signific_ger_95 = NULL
fr.xy$source_db = "RMQS1"
fr.xy2 = fr.xy[,-which(names(fr.xy) == "h_wrb4")]
fr.xy2 = plyr::rename(fr.xy2, c("WRB_correlation_2"="h_wrb4"))
```
7\.5 Final compilation of analysis\-ready points
------------------------------------------------
The imported point datasets can finally be combined into a single, globally consistent,
analysis\-ready training dataset (here we limit the simulated HWSDv2 points to a maximum of 20,000):
```
sel.cn = c("profile_id", "latitude", "longitude", "h_wrb4", "source_db")
tr.pnts = plyr::rbind.fill(list(h.wosis.wrb[, sel.cn],
              hwsd.wrb[sample.int(n = nrow(hwsd.wrb), size = 2e4), sel.cn],
              lc.xy, hr.xy2[, sel.cn],
              peat.xy[, sel.cn],
              hr.xy[, sel.cn],
              it.xy[, sel.cn], de.xy[, sel.cn], de.xy2[, sel.cn],
              fr.xy[, sel.cn], fr.xy2[, sel.cn]))
```
We can clean up some systematic naming issues common in many soil databases:
```
tr.pnts$h_wrb4 = gsub(",", " ", tr.pnts$h_wrb4)
tr.pnts$h_wrb4 = gsub("distric", "dystric", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = gsub("sol ", "sols ", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = gsub("sol$", "sols", tr.pnts$h_wrb4, ignore.case = TRUE)
tr.pnts$h_wrb4 = ifelse(tr.pnts$h_wrb4=="", NA, tr.pnts$h_wrb4)
## subset to complete points:
tr.pnts = tr.pnts[!is.na(tr.pnts$h_wrb4),]
```
This gives a total of about 70,000 training points with a WRB soil type. Note that correlation
between different national systems is not trivial, hence you should always check
the [folder](https://github.com/OpenGeoHub/SoilSamples/tree/main/correlation) `./correlation` to see whether some soil types have been incorrectly
correlated. Also, IUSS and FAO kindly maintain the [World Reference Base (WRB)](https://www.fao.org/soils-portal/data-hub/soil-classification/world-reference-base/en/),
but a table correlating the various versions of WRB is [often not trivial](https://docs.google.com/spreadsheets/d/1GaNpiH65yiuHusNVkUrKog2FCiVUO6kz_wNdIzHbdfg/edit#gid=1418992436) to build, so we have to
improvise somewhat (or ignore the issue) where soil types have changed over time.
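Before modelling, a quick sanity check is to tabulate the harmonized classes and the contribution of each source; a minimal sketch using the `tr.pnts` object prepared above:
```
## most frequent harmonized WRB classes
head(sort(table(tr.pnts$h_wrb4), decreasing = TRUE), 10)
## number of training points per source database
table(tr.pnts$source_db)
```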
```
wrb.pnts_profiles_sf <- sf::st_as_sf(tr.pnts, coords = c("longitude","latitude"), crs="EPSG:4326")
if(!file.exists("./img/sol_wrb.pnts_profiles.pdf")){  ## assumed guard: regenerate the figure only if missing
plot_gh(wrb.pnts_profiles_sf, out.pdf="./img/sol_wrb.pnts_profiles.pdf")
  system("pdftoppm ./img/sol_wrb.pnts_profiles.pdf ./img/sol_wrb.pnts_profiles -png -f 1 -singlefile")
  system("convert -crop 1280x575+36+114 ./img/sol_wrb.pnts_profiles.png ./img/sol_wrb.pnts_profiles.png")
}
```
Figure: A global compilation of soil profiles with WRB soil type classification.
```
wrb.pnts_profilesL_sf <- sf::st_as_sf(tr.pnts[!(tr.pnts$source_db %in% c("HWSDv2", "GlobalLCV")),], coords = c("longitude","latitude"), crs="EPSG:4326")
if(!file.exists("./img/sol_wrb.pnts_tot.profiles.pdf")){  ## assumed guard: regenerate the figure only if missing
plot_gh(wrb.pnts_profilesL_sf, out.pdf="./img/sol_wrb.pnts_tot.profiles.pdf")
  system("pdftoppm ./img/sol_wrb.pnts_tot.profiles.pdf ./img/sol_wrb.pnts_tot.profiles -png -f 1 -singlefile")
  system("convert -crop 1280x575+36+114 ./img/sol_wrb.pnts_tot.profiles.png ./img/sol_wrb.pnts_tot.profiles.png")
}
```
Figure: A global compilation of soil profiles with WRB soil type classification: a subset with actual soil profiles.
We can export the final training points to a GeoPackage file using e.g.:
```
sf::write_sf(sf::st_as_sf(tr.pnts[!tr.pnts$source_db=="Italian_SPDB",],
             coords = c("longitude","latitude"), crs="EPSG:4326"),
             paste0("./out/gpkg/sol_wrb.pnts_profiles.gpkg"), driver = "GPKG")
saveRDS(tr.pnts[!tr.pnts$source_db=="Italian_SPDB",], "./out/rds/sol_wrb.pnts_profiles.rds")
#save.image.pigz(file="soilwrb.RData")
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/software-information-and-conventions.html |
Software information and conventions
------------------------------------
Package names, inline code, and file names are formatted in a typewriter font (e.g. `methylKit`). Function names are followed by parentheses (e.g. `genomation::ScoreMatrix()`). The double\-colon operator `::` means accessing an object from a package.
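For instance, a minimal illustration of the `::` operator using a base package, so nothing extra needs to be installed:
```
# call a function from the stats package without attaching it via library()
stats::median(c(1, 5, 9))   # returns 5
```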
### Assignment operator convention
Traditionally, `<-` is the preferred assignment operator. However, throughout the book we use `=` and `<-` as the assignment operator interchangeably.
### Packages needed to run the book code
This book is primarily about using R packages to analyze genomics data; therefore, if you want to reproduce the analyses in this book, you need to install the relevant packages for each chapter using the `install.packages` or `BiocManager::install` functions. In each chapter, we load the necessary packages with the `library()` or `require()` function when we use functions from the respective packages. By looking at these calls, you can see which packages are needed for that code chunk or chapter. If you need to install all the package dependencies for the book, you can run the following command and have a cup of tea while waiting.
```
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install(c('qvalue','plot3D','ggplot2','pheatmap','cowplot',
'cluster', 'NbClust', 'fastICA', 'NMF','matrixStats',
'Rtsne', 'mosaic', 'knitr', 'genomation',
'ggbio', 'Gviz', 'DESeq2', 'RUVSeq',
'gProfileR', 'ggfortify', 'corrplot',
'gage', 'EDASeq', 'citr', 'formatR',
'svglite', 'Rqc', 'ShortRead', 'QuasR',
'methylKit','FactoMineR', 'iClusterPlus',
'enrichR','caret','xgboost','glmnet',
'DALEX','kernlab','pROC','nnet','RANN',
'ranger','GenomeInfoDb', 'GenomicRanges',
'GenomicAlignments', 'ComplexHeatmap', 'circlize',
'rtracklayer', 'BSgenome.Hsapiens.UCSC.hg38',
'BSgenome.Hsapiens.UCSC.hg19','tidyr',
'AnnotationHub', 'GenomicFeatures', 'normr',
'MotifDb', 'TFBSTools', 'rGADEM', 'JASPAR2018'
))
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/reproducibility-statement.html |
Reproducibility statement
-------------------------
This book is compiled with R 4\.0\.0 and the following packages. We only list the main packages and their versions but not their dependencies.
```
## qvalue_2.20.0 | plot3D_1.3 | ggplot2_3.3.1 | pheatmap_1.0.12
## cowplot_1.0.0 | cluster_2.1.0 | NbClust_3.0 | fastICA_1.2.2
## NMF_0.23.0 | matrixStats_0.56.0 | Rtsne_0.15 | mosaic_1.7.0
## knitr_1.28 | genomation_1.20.0 | ggbio_1.36.0 | Gviz_1.32.0
## DESeq2_1.28.1 | RUVSeq_1.22.0 | gProfileR_0.7.0 | ggfortify_0.4.10
## corrplot_0.84 | gage_2.37.0 | EDASeq_2.22.0 | citr_0.3.2
## formatR_1.7 | svglite_1.2.3 | Rqc_1.22.0 | ShortRead_1.46.0
## QuasR_1.28.0 | methylKit_1.14.2 | FactoMineR_2.3 | iClusterPlus_1.24.0
## enrichR_2.1 | caret_6.0.86 | xgboost_1.0.0.2 | glmnet_4.0
## DALEX_1.2.1 | kernlab_0.9.29 | pROC_1.16.2 | nnet_7.3.14
## RANN_2.6.1 | ranger_0.12.1 | GenomeInfoDb_1.24.0 | GenomicRanges_1.40.0
## GenomicAlignments_1.24.0 | ComplexHeatmap_2.4.2 | circlize_0.4.9 | rtracklayer_1.48.0
## tidyr_1.1.0 | AnnotationHub_2.20.0 | GenomicFeatures_1.40.0 | normr_1.14.0
## MotifDb_1.30.0 | TFBSTools_1.26.0 | rGADEM_2.36.0 | JASPAR2018_1.1.1
## BSgenome.Hsapiens.UCSC.hg38_1.4.3 | BSgenome.Hsapiens.UCSC.hg19_1.4.3
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/getting-started-with-r.html |
2\.2 Getting started with R
---------------------------
Download and install R ([http://cran.r\-project.org/](http://cran.r-project.org/)) and RStudio (<http://www.rstudio.com/>) if you do not have them already. RStudio is optional, but it is a great tool if you are just starting to learn R.
You will need specific data sets to run the code snippets in this book; we have explained how to install and use the data in the [Data for the book](data-for-the-book.html#data-for-the-book) section in the [Preface](index.html#preface). If you have not used RStudio before, we recommend running it and familiarizing yourself with it first. To put it simply, this interface combines multiple features you will need while analyzing data. You can see your code, how it is executed, the plots you make, and your data all in one interface.
### 2\.2\.1 Installing packages
R packages are add\-ons to base R that help you achieve additional tasks that are not directly supported by base R. It is through this extra functionality that R excels as a tool for computational genomics. The Bioconductor project (<http://bioconductor.org/>) is a dedicated package repository for computational biology\-related packages. However, the main package repository of R, called CRAN, also has computational biology\-related packages. In addition, R\-Forge ([http://r\-forge.r\-project.org/](http://r-forge.r-project.org/)), GitHub (<https://github.com/>), and Bitbucket (<http://www.bitbucket.org>) are some of the other locations where R packages might be hosted. The packages needed for the code snippets in this book and how to install them are explained in the [Packages needed to run the book code](software-information-and-conventions.html#packages-needed-to-run-the-book-code) section in the [Preface](index.html#preface) of the book.
You can install CRAN packages using `install.packages()` (\# is the comment character in R).
```
# install package named "randomForest" from CRAN
install.packages("randomForest")
```
You can install bioconductor packages with a specific installer script.
```
# get the installer package if you don't have
install.packages("BiocManager")
# install bioconductor package "rtracklayer"
BiocManager::install("rtracklayer")
```
You can install packages from GitHub using the `install_github()` function from `devtools` package.
```
library(devtools)
install_github("hadley/stringr")
```
Another way to install packages is from the source.
```
# download the source file
download.file(
"https://github.com/al2na/methylKit/releases/download/v0.99.2/methylKit_0.99.2.tar.gz",
destfile="methylKit_0.99.2.tar.gz")
# install the package from the source file
install.packages("methylKit_0.99.2.tar.gz",
repos=NULL,type="source")
# delete the source file
unlink("methylKit_0.99.2.tar.gz")
```
You can also update CRAN and Bioconductor packages.
```
# updating CRAN packages
update.packages()
# updating bioconductor packages
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install()
```
### 2\.2\.2 Installing packages in custom locations
If you will be using R on servers or computing clusters rather than your personal computer, it is unlikely that you will have administrator access to install packages. In that case, you can install packages in custom locations by telling R where to look for additional packages. This is done by setting up an `.Renviron` file in your home directory and adding the following line:
```
R_LIBS=~/Rlibs
```
This tells R that the “Rlibs” directory at your home directory will be the first choice of locations to look for packages and install packages (the directory name and location is up to you, the above is just an example). You should go and create that directory now. After that, start a fresh R session and start installing packages. From now on, packages will be installed to your local directory where you have read\-write access.
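A quick way to verify the setup (a minimal sketch, assuming you created `~/Rlibs` and restarted R):
```
# the custom library should now appear first in the search path
.libPaths()
# new packages will be installed into ~/Rlibs from now on
install.packages("stringr")
```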
### 2\.2\.3 Getting help on functions and packages
You can get help on functions by using the `help()` and `help.search()` functions. You can list the functions in a package with the `ls()` function.
```
library(MASS)
ls("package:MASS") # functions in the package
ls() # objects in your R environment
# get help on hist() function
?hist
help("hist")
# search the word "hist" in help pages
help.search("hist")
??hist
```
#### 2\.2\.3\.1 More help needed?
In addition, check package vignettes for help and practical understanding of the functions. All Bioconductor packages have vignettes that walk you through example analysis. Google search will always be helpful as well; there are many blogs and web pages that have posts about R. R\-help mailing list ([https://stat.ethz.ch/mailman/listinfo/r\-help](https://stat.ethz.ch/mailman/listinfo/r-help)), Stackoverflow.com and R\-bloggers.com are usually sources of good and reliable information.
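For example, to list and open the vignettes of an installed package (here `methylKit`, assuming it is already installed):
```
# list the vignettes shipped with an installed package
vignette(package = "methylKit")
# open its vignettes in a web browser
browseVignettes("methylKit")
```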
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/computations-in-r.html |
2\.3 Computations in R
----------------------
R can be used as an ordinary calculator, and some say it is an over\-grown calculator. Here are some examples. Remember that `#` is the comment character. The comments give details about the operations in case they are not clear.
```
2 + 3 * 5 # Note the order of operations.
log(10) # Natural logarithm with base e
5^2 # 5 raised to the second power
3/2 # Division
sqrt(16) # Square root
abs(3-7) # Absolute value of 3-7
pi # The number pi
exp(2) # exponential function
# This is a comment line
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/data-structures.html |
2\.4 Data structures
--------------------
R has multiple data structures. If you are familiar with Excel, you can think of a single Excel sheet as a table and data structures as building blocks of that table. Most of the time you will deal with tabular data sets or you will want to transform your raw data to a tabular data set, and you will try to manipulate this tabular data set in some way. For example, you may want to take sub\-sections of the table or extract all the values in a column. For these and similar purposes, it is essential to know the common data structures in R and how they can be used. R deals with named data structures, which means you can give names to data structures and manipulate or operate on them using those names. It will be clear soon what we mean by this if “named data structures” does not ring a bell.
### 2\.4\.1 Vectors
Vectors are one of the core R data structures. A vector is basically an ordered collection of elements of the same type (numeric, character, or logical). Later you will see that every column of a table will be represented as a vector. R handles vectors easily and intuitively. You can create vectors with the `c()` function, however that is not the only way. Operations on vectors will propagate to all of their elements.
```
x<-c(1,3,2,10,5) #create a vector named x with 5 components
x = c(1,3,2,10,5)
x
```
```
## [1] 1 3 2 10 5
```
```
y<-1:5 #create a vector of consecutive integers y
y+2 #scalar addition
```
```
## [1] 3 4 5 6 7
```
```
2*y #scalar multiplication
```
```
## [1] 2 4 6 8 10
```
```
y^2 #raise each component to the second power
```
```
## [1] 1 4 9 16 25
```
```
2^y #raise 2 to the first through fifth power
```
```
## [1] 2 4 8 16 32
```
```
y # y itself is unchanged
```
```
## [1] 1 2 3 4 5
```
```
y<-y*2
y #it is now changed
```
```
## [1] 2 4 6 8 10
```
```
r1<-rep(1,3) # create a vector of 1s, length 3
length(r1) #length of the vector
```
```
## [1] 3
```
```
class(r1) # class of the vector
```
```
## [1] "numeric"
```
```
a<-1 # this is actually a vector length one
```
The standard assignment operator in R is `<-`. This operator is preferentially used in books and documentation. However, it is also possible to use the `=` operator for the assignment.
We have an example in the above code snippet and throughout the book we use `<-` and `=` interchangeably for assignment.
### 2\.4\.2 Matrices
A matrix refers to a numeric array of rows and columns. You can think of it as a stacked version of vectors where each row or column is a vector. One of the easiest ways to create a matrix is to combine vectors of equal length using `cbind()`, meaning ‘column bind’.
```
x<-c(1,2,3,4)
y<-c(4,5,6,7)
m1<-cbind(x,y);m1
```
```
## x y
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
## [4,] 4 7
```
```
t(m1) # transpose of m1
```
```
## [,1] [,2] [,3] [,4]
## x 1 2 3 4
## y 4 5 6 7
```
```
dim(m1) # 4 by 2 matrix
```
```
## [1] 4 2
```
You can also directly list the elements and specify the matrix:
```
m2<-matrix(c(1,3,2,5,-1,2,2,3,9),nrow=3)
m2
```
```
## [,1] [,2] [,3]
## [1,] 1 5 2
## [2,] 3 -1 3
## [3,] 2 2 9
```
Matrices and the next data structure, **data frames**, are tabular data structures. You can subset them using `[]` and providing desired rows and columns to subset. Figure [2\.1](data-structures.html#fig:slicingDataFrames) shows how that works conceptually.
FIGURE 2\.1: Slicing/subsetting of a matrix and a data frame.
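As a quick illustration of `[]` subsetting, here is a minimal sketch using the matrix `m2` defined above:
```
m2[1, ]       # first row of m2
m2[, 3]       # third column of m2
m2[1:2, 2:3]  # rows 1-2 and columns 2-3 as a 2 x 2 sub-matrix
```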
### 2\.4\.3 Data frames
A data frame is more general than a matrix, in that different columns can have different modes (numeric, character, factor, etc.). A data frame can be constructed by the `data.frame()` function. For example, we illustrate how to construct a data frame from genomic intervals or coordinates.
```
chr <- c("chr1", "chr1", "chr2", "chr2")
strand <- c("-","-","+","+")
start<- c(200,4000,100,400)
end<-c(250,410,200,450)
mydata <- data.frame(chr,start,end,strand)
#change column names
names(mydata) <- c("chr","start","end","strand")
mydata # OR this will work too
```
```
## chr start end strand
## 1 chr1 200 250 -
## 2 chr1 4000 410 -
## 3 chr2 100 200 +
## 4 chr2 400 450 +
```
```
mydata <- data.frame(chr=chr,start=start,end=end,strand=strand)
mydata
```
```
## chr start end strand
## 1 chr1 200 250 -
## 2 chr1 4000 410 -
## 3 chr2 100 200 +
## 4 chr2 400 450 +
```
There are a variety of ways to extract the elements of a data frame. You can extract certain columns using column numbers or names, or you can extract certain rows by using row numbers. You can also extract data using logical arguments, such as extracting all rows that have a value in a column larger than your threshold.
```
mydata[,2:4] # columns 2,3,4 of data frame
```
```
## start end strand
## 1 200 250 -
## 2 4000 410 -
## 3 100 200 +
## 4 400 450 +
```
```
mydata[,c("chr","start")] # columns chr and start from data frame
```
```
## chr start
## 1 chr1 200
## 2 chr1 4000
## 3 chr2 100
## 4 chr2 400
```
```
mydata$start # variable start in the data frame
```
```
## [1] 200 4000 100 400
```
```
mydata[c(1,3),] # get 1st and 3rd rows
```
```
## chr start end strand
## 1 chr1 200 250 -
## 3 chr2 100 200 +
```
```
mydata[mydata$start>400,] # get all rows where start>400
```
```
## chr start end strand
## 2 chr1 4000 410 -
```
### 2\.4\.4 Lists
A list in R is an ordered collection of objects (components). A list allows you to gather a variety of (possibly unrelated) objects under one name. You can create a list with the `list()` function. Each object or element in the list has a numbered position and can have names. Below we show a few examples of how to create lists.
```
# example of a list with 4 components
# a string, a numeric vector, a matrix, and a scalar
w <- list(name="Fred",
mynumbers=c(1,2,3),
mymatrix=matrix(1:4,ncol=2),
age=5.3)
w
```
```
## $name
## [1] "Fred"
##
## $mynumbers
## [1] 1 2 3
##
## $mymatrix
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
##
## $age
## [1] 5.3
```
You can extract elements of a list using the `[[]]`, the double square\-bracket, convention using either its position in the list or its name.
```
w[[3]] # 3rd component of the list
```
```
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
```
```
w[["mynumbers"]] # component named mynumbers in list
```
```
## [1] 1 2 3
```
```
w$age
```
```
## [1] 5.3
```
### 2\.4\.5 Factors
Factors are used to store categorical data. They are important for statistical modeling since categorical variables are treated differently in statistical models than continuous variables. Storing categorical variables as factors ensures that they are treated accordingly in statistical models.
```
features=c("promoter","exon","intron")
f.feat=factor(features)
```
An important thing to note is that, when you are reading a data frame with `read.table()` or creating a data frame with the `data.frame()` function, character columns were stored as factors by default in R versions before 4\.0\.0; to change this behavior you need to set `stringsAsFactors=FALSE` in the `read.table()` and/or `data.frame()` function arguments. Since R 4\.0\.0, character columns are kept as character by default.
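A minimal sketch using the `features` vector defined above:
```
# keep the feature column as plain character strings
df1 <- data.frame(feature = features, stringsAsFactors = FALSE)
class(df1$feature)  # "character"
# force factor conversion explicitly when you need it
df2 <- data.frame(feature = features, stringsAsFactors = TRUE)
class(df2$feature)  # "factor"
```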
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/data-types.html |
2\.5 Data types
---------------
There are four common data types in R: `numeric`, `logical`, `character`, and `integer`. All of these data types can be used to create vectors natively.
```
#create a numeric vector x with 5 components
x<-c(1,3,2,10,5)
x
```
```
## [1] 1 3 2 10 5
```
```
#create a logical vector x
x<-c(TRUE,FALSE,TRUE)
x
```
```
## [1] TRUE FALSE TRUE
```
```
# create a character vector
x<-c("sds","sd","as")
x
```
```
## [1] "sds" "sd" "as"
```
```
class(x)
```
```
## [1] "character"
```
```
# create an integer vector
x<-c(1L,2L,3L)
x
```
```
## [1] 1 2 3
```
```
class(x)
```
```
## [1] "integer"
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/reading-and-writing-data.html |
2\.6 Reading and writing data
-----------------------------
Most of the genomics data sets are in the form of genomic intervals associated with a score. That means mostly the data will be in table format with columns denoting chromosome, start positions, end positions, strand and score. One of the popular formats is the BED format, which is used primarily by the UCSC genome browser but most other genome browsers and tools will support the BED file format. We have all the annotation data in BED format. You will read more about data formats in Chapter [6](genomicIntervals.html#genomicIntervals). In R, you can easily read tabular format data with the `read.table()` function.
```
enhancerFilePath=system.file("extdata",
"subset.enhancers.hg18.bed",
package="compGenomRData")
cpgiFilePath=system.file("extdata",
"subset.cpgi.hg18.bed",
package="compGenomRData")
# read enhancer marker BED file
enh.df <- read.table(enhancerFilePath, header = FALSE)
# read CpG island BED file
cpgi.df <- read.table(cpgiFilePath, header = FALSE)
# check the first lines to see what the data looks like
head(enh.df)
```
```
## V1 V2 V3 V4 V5 V6 V7 V8 V9
## 1 chr20 266275 267925 . 1000 . 9.11 13.1693 -1
## 2 chr20 287400 294500 . 1000 . 10.53 13.0231 -1
## 3 chr20 300500 302500 . 1000 . 9.10 13.3935 -1
## 4 chr20 330400 331800 . 1000 . 6.39 13.5105 -1
## 5 chr20 341425 343400 . 1000 . 6.20 12.9852 -1
## 6 chr20 437975 439900 . 1000 . 6.31 13.5184 -1
```
```
head(cpgi.df)
```
```
## V1 V2 V3 V4
## 1 chr20 195575 195851 CpG:_28
## 2 chr20 207789 208148 CpG:_32
## 3 chr20 219055 219437 CpG:_33
## 4 chr20 225831 227155 CpG:_135
## 5 chr20 252826 256323 CpG:_286
## 6 chr20 275376 276977 CpG:_116
```
You can save your data by writing it to disk as a text file. A data frame or matrix can be written out by using the `write.table()` function. Now let us write out `cpgi.df`. We will write it out as a tab\-separated file; pay attention to the arguments.
```
write.table(cpgi.df,file="cpgi.txt",quote=FALSE,
row.names=FALSE,col.names=FALSE,sep="\t")
```
You can save your R objects directly into a file using `save()` and `saveRDS()` and load them back in with `load()` and `readRDS()`. By using these functions you can save any R object whether or not it is in data frame or matrix classes.
```
save(cpgi.df,enh.df,file="mydata.RData")
load("mydata.RData")
# saveRDS() can save one object at a time
saveRDS(cpgi.df,file="cpgi.rds")
x=readRDS("cpgi.rds")
head(x)
```
One important thing is that with `save()` you can save many objects at a time, and when they are loaded into memory with `load()` they retain their variable names. For example, in the above code, when you use `load("mydata.RData")` in a fresh R session, objects named `cpgi.df` and `enh.df` will be created. That means you have to remember what names you gave to the objects before saving them. Conversely, when you save an object with `saveRDS()` and read it back with `readRDS()`, the name of the object is not retained, and you need to assign the output of `readRDS()` to a new variable (`x` in the above code chunk).
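A short sketch of the difference, assuming the files written in the previous chunk exist in your working directory:
```
# load() restores objects under their original names
rm(cpgi.df)
load("mydata.RData")          # cpgi.df and enh.df reappear
exists("cpgi.df")             # TRUE
# readRDS() returns the object and you choose the name yourself
cpgi.copy <- readRDS("cpgi.rds")
identical(cpgi.copy, cpgi.df) # TRUE
```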
### 2\.6\.1 Reading large files
Reading large files that contain tables with the base R function `read.table()` might take a very long time. Therefore, there are additional packages that provide faster functions to read such files. The `data.table` and `readr` packages provide this functionality. Below, we show how to use them. With the parameters shown, these functions return output equivalent to that of the `read.table()` function.
```
library(data.table)
df.f=fread(enhancerFilePath, header = FALSE, data.table=FALSE)
library(readr)
df.f2=read_table(enhancerFilePath, col_names = FALSE)
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/plotting-in-r-with-base-graphics.html |
2\.7 Plotting in R with base graphics
-------------------------------------
R has great support for plotting and customizing plots by default. This basic capability for plotting in R is referred to as “base graphics” or “R base graphics”. We will show only a few below. Let us sample 50 values from the normal distribution and plot them as a histogram. A histogram is an approximate representation of a distribution. Bars show how frequently we observe certain values in our sample. The resulting histogram from the code chunk below is shown in Figure [2\.2](plotting-in-r-with-base-graphics.html#fig:sampleForPlots).
```
# sample 50 values from normal distribution
# and store them in vector x
x<-rnorm(50)
hist(x) # plot the histogram of those values
```
FIGURE 2\.2: Histogram of values sampled from normal distribution.
We can modify all the plots by providing certain arguments to the plotting function. Now let’s give a title to the plot using the `main` argument. We can also change the color of the bars using the `col` argument. You can simply provide the name of the color. Below, we are using `'red'` for the color. See Figure [2\.3](plotting-in-r-with-base-graphics.html#fig:makeHist) for the result of this code chunk.
```
hist(x,main="Hello histogram!!!",col="red")
```
FIGURE 2\.3: Histogram in red color.
Next, we will make a scatter plot. Scatter plots are one of the most common plots you will encounter in data analysis. We will sample another set of 50 values and plot those against the ones we sampled earlier. The scatter plot shows values of two variables for a set of data points. It is useful to visualize relationships between two variables. It is frequently used in connection with correlation and linear regression. There are other variants of scatter plots which show the density of the points with different colors. We will show examples of those scatter plots in later chapters. The scatter plot from our sampling experiment is shown in Figure [2\.4](plotting-in-r-with-base-graphics.html#fig:makeScatter). Notice that, in addition to the `main` argument, we used the `xlab` and `ylab` arguments to give labels to the plot. You can customize the plots even more than this. See `?plot` and `?par` for more arguments that can help you customize the plots.
```
# randomly sample 50 points from normal distribution
y<-rnorm(50)
#plot a scatter plot
# control x-axis and y-axis labels
plot(x,y,main="scatterplot of random samples",
ylab="y values",xlab="x values")
```
FIGURE 2\.4: Scatter plot example.
We can also plot boxplots for vectors x and y. Boxplots depict groups of numerical data through their quartiles. The edges of the box denote the 1st and 3rd quartiles, and the line that crosses the box is the median. The distance between the 1st and the 3rd quartiles is called the interquartile range. The whiskers (lines extending from the boxes) are usually defined using the interquartile range for symmetric distributions as follows: `lowerWhisker=Q1-1.5[IQR]` and `upperWhisker=Q3+1.5[IQR]`.
In addition, outliers can be depicted as dots. In this case, outliers are the values that remain outside the whiskers. The resulting plot from the code snippet below is shown in Figure [2\.5](plotting-in-r-with-base-graphics.html#fig:makeBoxplot).
```
boxplot(x,y,main="boxplots of random samples")
```
FIGURE 2\.5: Boxplot example
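To relate the whisker definition above to our sample `x`, here is a minimal sketch of the `1.5*IQR` rule (note that `boxplot()` computes hinges slightly differently from `quantile()` for small samples):
```
# whisker limits for x following the 1.5 * IQR rule
q <- quantile(x, c(0.25, 0.75))   # 1st and 3rd quartiles
c(lower = unname(q[1]) - 1.5 * IQR(x),
  upper = unname(q[2]) + 1.5 * IQR(x))
```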
Next up is the bar plot, which you can plot using the `barplot()` function. We are going to plot four imaginary percentage values and color them with two colors, and this time we will also show how to draw a legend on the plot using the `legend()` function. The resulting plot is in Figure [2\.6](plotting-in-r-with-base-graphics.html#fig:makebarplot).
```
perc=c(50,70,35,25)
barplot(height=perc,
names.arg=c("CpGi","exon","CpGi","exon"),
ylab="percentages",main="imagine %s",
col=c("red","red","blue","blue"))
legend("topright",legend=c("test","control"),
fill=c("red","blue"))
```
FIGURE 2\.6: Bar plot example
### 2\.7\.1 Combining multiple plots
In R, we can combine multiple plots in the same graphic. For this purpose, we use the `par()` function for simple combinations. More complicated arrangements with different sizes of sub\-plots can be created with the `layout()` function. Below we will show how to combine two plots side\-by\-side using `par(mfrow=c(1,2))`. The `mfrow=c(nrows, ncols)` construct will create a matrix of `nrows` x `ncols` plots that are filled in by row. The following code will produce a histogram and a scatter plot stacked side by side. The result is shown in Figure [2\.7](plotting-in-r-with-base-graphics.html#fig:combineBasePlots). If you want to see the plots on top of each other, simply change `mfrow=c(1,2)` to `mfrow=c(2,1)`.
```
par(mfrow=c(1,2)) # 1 row, 2 columns of plots
# make the plots
hist(x,main="Hello histogram!!!",col="red")
plot(x,y,main="scatterplot",
ylab="y values",xlab="x values")
```
FIGURE 2\.7: Combining two plots, a histogram and a scatter plot, with `par()` function.
### 2\.7\.2 Saving plots
If you want to save your plots to an image file, there are a couple of ways of doing that. Normally, you will have to do the following:
1. Open a graphics device.
2. Create the plot.
3. Close the graphics device.
```
pdf("mygraphs/myplot.pdf",width=5,height=5)
plot(x,y)
dev.off()
```
Alternatively, you can first create the plot then copy the plot to a graphics device.
```
plot(x,y)
dev.copy(pdf,"mygraphs/myplot.pdf",width=7,height=5)
dev.off()
```
| Life Sciences |
compgenomr.github.io | http://compgenomr.github.io/book/plotting-in-r-with-ggplot2.html |
2\.8 Plotting in R with ggplot2
-------------------------------
In R, there are other plotting systems besides “base graphics”, which is what we have shown until now. There is another popular plotting system called `ggplot2` which implements a different logic when constructing the plots. This system or logic is known as the “grammar of graphics”. This system defines a plot or graphics as a combination of different components. For example, in the scatter plot in [2\.4](plotting-in-r-with-base-graphics.html#fig:makeScatter), we have the points which are geometric shapes, we have the coordinate system and scales of data. In addition, data transformations are also part of a plot. In Figure [2\.3](plotting-in-r-with-base-graphics.html#fig:makeHist), the histogram has a binning operation and it puts the data into bins before displaying it as geometric shapes, the bars. The `ggplot2` system and its implementation of “grammar of graphics”[1](#fn1) allows us to build the plot layer by layer using the predefined components.
Next we will see how this works in practice. Let’s start with a simple scatter plot using `ggplot2`. In order to make basic plots in `ggplot2`, one needs to combine different components. First, we need the data and its transformation to a geometric object; for a scatter plot this would be mapping data to points, for histograms it would be binning the data and making bars. Second, we need the scales and coordinate system, which generates axes and legends so that we can see the values on the plot. And the last component is the plot annotation such as plot title and the background.
The main `ggplot2` function, called `ggplot()`, requires a data frame to work with, and this data frame is its first argument, as shown in the code snippet below. The second thing you will notice is the `aes()` function inside the `ggplot()` call. This function defines which columns in the data frame map to x and y coordinates and whether they should be colored or have different shapes based on the values in a different column. These elements are the “aesthetic” elements; this is what we observe in the plot. The last line in the code represents the geometric object to be plotted. These geometric objects define the type of the plot. In this case, the object is a point, indicated by the `geom_point()` function. Another peculiar thing in the code is the `+` operation. In `ggplot2`, this operation is used to add layers and modify the plot. The resulting scatter plot from the code snippet below can be seen in Figure [2\.8](plotting-in-r-with-ggplot2.html#fig:ggScatterchp3).
```
library(ggplot2)
myData=data.frame(col1=x,col2=y)
# the data is myData and I’m using col1 and col2
# columns on x and y axes
ggplot(myData, aes(x=col1, y=col2)) +
geom_point() # map x and y as points
```
FIGURE 2\.8: Scatter plot with ggplot2
Now, let’s re\-create the histogram we created before. For this, we will start again with the `ggplot()` function. We are interested only in the x\-axis in the histogram, so we will only use one column of the data frame. Then, we will add the histogram layer with the `geom_histogram()` function. In addition, we will be showing how to modify your plot further by adding an additional layer with the `labs()` function, which controls the axis labels and titles. The resulting plot from the code chunk below is shown in Figure [2\.9](plotting-in-r-with-ggplot2.html#fig:ggHistChp3).
```
ggplot(myData, aes(x=col1)) +
geom_histogram() + # map x and y as points
labs(title="Histogram for a random variable", x="my variable", y="Count")
```
FIGURE 2\.9: Histograms made with ggplot2; the left histogram contains additional modifications introduced by the `labs()` function.
We can also plot boxplots using `ggplot2`. Let’s re\-create the boxplot we did in Figure [2\.5](plotting-in-r-with-base-graphics.html#fig:makeBoxplot). This time we will have to put all our data into a single data frame with extra columns denoting the group of our values. In the base graphics case, we could just input variables containing different vectors. However, `ggplot2` does not work like that and we need to create a data frame with the right format to use the `ggplot()` function. Below, we first concatenate the `x` and `y` vectors and create a second column denoting the group for the vectors. In this case, the x\-axis will be the “group” variable which is just a character denoting the group, and the y\-axis will be the numeric “values” for the `x` and `y` vectors. You can see how this is passed to the `aes()` function below. The resulting plot is shown in Figure [2\.10](plotting-in-r-with-ggplot2.html#fig:ggBoxplotchp3).
```
# data frame with group column showing which
# groups the vector x and y belong
myData2=rbind(data.frame(values=x,group="x"),
data.frame(values=y,group="y"))
# x-axis will be group and y-axis will be values
ggplot(myData2, aes(x=group,y=values)) +
geom_boxplot()
```
FIGURE 2\.10: Boxplots using ggplot2\.
### 2\.8\.1 Combining multiple plots
There are different options for combining multiple plots. If we are trying to make similar plots for the subsets of the same data set, we can use faceting. This is a built\-in and very useful feature of `ggplot2`. This feature is frequently used when investigating whether patterns are the same or different in different conditions or subsets of the data. It can be used via the `facet_grid()` function. Below, we will make two histograms faceted by the `group` variable in the input data frame. We will be using the same data frame we created for the boxplot in the previous section. The resulting plot is in Figure [2\.11](plotting-in-r-with-ggplot2.html#fig:facetHistChp3).
```
ggplot(myData2, aes(x=values)) +
geom_histogram() +facet_grid(.~group)
```
FIGURE 2\.11: Combining two plots using `ggplot2::facet_grid()` function.
Faceting only works when you are using the subsets of the same data set. However, you may want to combine different types of plots from different data sets. The base R functions such as `par()` and `layout()` will not work with `ggplot2` because it uses a different graphics system and this system does not recognize base R functionality for plotting. However, there are multiple ways you can combine plots from `ggplot2`. One way is using the `cowplot` package. This package aligns the individual plots in a grid and will help you create publication\-ready compound plots. Below, we will show how to combine a histogram and a scatter plot side by side. The resulting plot is shown in Figure [2\.12](plotting-in-r-with-ggplot2.html#fig:cowPlotChp3).
```
library(cowplot)
# histogram
p1 <- ggplot(myData2, aes(x=values,fill=group)) +
geom_histogram()
# scatterplot
p2 <- ggplot(myData, aes(x=col1, y=col2)) +
geom_point()
# plot two plots in a grid and label them as A and B
plot_grid(p1, p2, labels = c('A', 'B'), label_size = 12)
```
FIGURE 2\.12: Combining a histogram and scatter plot using the `cowplot` package. The plots are labeled as A and B using the arguments of the `plot_grid()` function.
### 2\.8\.2 ggplot2 and tidyverse
`ggplot2` is actually part of a larger ecosystem of packages known as the `tidyverse`. You will need packages from this ecosystem when you want to use `ggplot2` in a more sophisticated manner or if you need additional functionality that is not readily available in base R or other packages. For example, when you want to make more complicated plots using `ggplot2`, you will need to reshape your data frames into the formats required by the `ggplot()` function, and for that you will want to learn about the `dplyr` and `tidyr` packages. If you are working with character strings, the `stringr` package might have functionality that is not available in base R. There are many more packages that users find useful in the `tidyverse`, and it is worth knowing about this ecosystem of R packages.
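For instance, the long\-format data frame we built above with `rbind()` could also be created with `tidyr`. The following is a minimal sketch (not from the book), assuming the numeric vectors `x` and `y` from the boxplot example are available and have equal length; the column names `values` and `group` mirror the ones used earlier.

```
library(tidyr)
library(ggplot2)

# reshape the two vectors into the long format expected by ggplot():
# one column of values and one column naming the group ("x" or "y")
myData2_tidy <- pivot_longer(data.frame(x = x, y = y),
                             cols = c(x, y),
                             names_to = "group",
                             values_to = "values")

# same boxplot as before, now using the tidyr-built data frame
ggplot(myData2_tidy, aes(x = group, y = values)) +
  geom_boxplot()
```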
**Want to know more?**
* `ggplot2` has a free online book written by Hadley Wickham: [https://ggplot2\-book.org/](https://ggplot2-book.org/)
* The `tidyverse` packages and the ecosystem are described on their website: <https://www.tidyverse.org/>. There you will find extensive documentation and resources on `tidyverse` packages.
2\.9 Functions and control structures (for, if/else etc.)
---------------------------------------------------------
### 2\.9\.1 User\-defined functions
Functions are useful for turning larger chunks of code into re\-usable pieces of code. Generally, if you need to execute certain tasks with variable parameters, then it is time to write a function. A function in R takes different arguments and returns a definite output, much like mathematical functions. Here is a simple function that takes two arguments, `x` and `y`, and returns the sum of their squares.
```
sqSum<-function(x,y){
result=x^2+y^2
return(result)
}
# now try the function out
sqSum(2,3)
```
```
## [1] 13
```
Functions can also output plots and/or messages to the terminal. Here is a function that prints a message to the terminal:
```
sqSumPrint<-function(x,y){
result=x^2+y^2
cat("here is the result:",result,"\n")
}
# now try the function out
sqSumPrint(2,3)
```
```
## here is the result: 13
```
Sometimes we want to execute a certain part of the code only if a certain condition is satisfied. This condition can be as simple as the type of an object (e.g., execute certain code only if the object is a matrix), or it can be more complicated, such as whether the object’s value is between certain thresholds. Let us see how these if statements can be used. They can be used anywhere in your code; here we will use them in a function to decide whether a CpG island is large, normal length, or short.
```
cpgi.df <- read.table("intro2R_data/data/subset.cpgi.hg18.bed", header = FALSE)
# function takes input one row
# of CpGi data frame
largeCpGi<-function(bedRow){
cpglen=bedRow[3]-bedRow[2]+1
if(cpglen>1500){
cat("this is large\n")
}
else if(cpglen<=1500 & cpglen>700){
cat("this is normal\n")
}
else{
cat("this is short\n")
}
}
largeCpGi(cpgi.df[10,])
largeCpGi(cpgi.df[100,])
largeCpGi(cpgi.df[1000,])
```
### 2\.9\.2 Loops and looping structures in R
When you need to repeat a certain task or execute a function multiple times, you can do that with the help of loops. A loop will execute the task until a certain condition is reached. The loop below is called a “for\-loop” and it executes the task sequentially 10 times.
```
for(i in 1:10){ # number of repetitions
cat("This is iteration") # the task to be repeated
print(i)
}
```
```
## This is iteration[1] 1
## This is iteration[1] 2
## This is iteration[1] 3
## This is iteration[1] 4
## This is iteration[1] 5
## This is iteration[1] 6
## This is iteration[1] 7
## This is iteration[1] 8
## This is iteration[1] 9
## This is iteration[1] 10
```
The task above is a bit pointless. Normally in a loop, you would want to do something meaningful. Let us calculate the length of the CpG islands we read in earlier. Although this is not the most efficient way of doing that particular task, it serves as a good example for looping. The code below will execute a hundred times, and it will calculate the length of each of the first 100 CpG islands in
the data frame (by subtracting the start coordinate from the end coordinate and adding 1).
**Note:** If you are going to run a loop that has a lot of repetitions, it is smart to try the loop with a few repetitions first and check the results. This will help you make sure the code in the loop works before executing it thousands of times.
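For instance, before running the full loop below, a quick dry run over just the first few rows (a small sketch, not part of the book's code) can confirm that the loop body behaves as expected:

```
# dry run: only 3 repetitions, printing each length to check the loop body
for(i in 1:3){
  len=cpgi.df[i,3]-cpgi.df[i,2]+1
  print(len)
}
```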
```
# this is where we will keep the lengths
# for now it is an empty vector
result=c()
# start the loop
for(i in 1:100){
#calculate the length
len=cpgi.df[i,3]-cpgi.df[i,2]+1
#append the length to the result
result=c(result,len)
}
# check the results
head(result)
```
```
## [1] 277 360 383 1325 3498 1602
```
#### 2\.9\.2\.1 Apply family functions instead of loops
R has other ways of repeating tasks, which tend to be more efficient than using loops. They are known collectively as the “apply” family of functions, which includes `apply`, `lapply`, `mapply` and `tapply` (and some other variants). All of these functions apply a given function to a set of instances and return the results for each instance. The difference between them is that they take different types of inputs. For example, `apply` works on data frames or matrices and applies the function on each row or column of the data structure. `lapply` works on lists or vectors and applies a function which takes each list element as an argument. Next we will demonstrate how to use `apply()` on a matrix. The example applies the `sum` function on the rows of a matrix; it basically sums up the values on each row, which is conceptualized in Figure [2\.13](functions-and-control-structures-for-ifelse-etc-.html#fig:applyConcept).
FIGURE 2\.13: apply() concept in R.
```
mat=cbind(c(3,0,3,3),c(3,0,0,0),c(3,0,0,3),c(1,1,0,0),c(1,1,1,0),c(1,1,1,0))
result<-apply(mat,1,sum)
result
```
```
## [1] 12 3 5 6
```
```
# OR you can define the function as an argument to apply()
result<-apply(mat,1,function(x) sum(x))
result
```
```
## [1] 12 3 5 6
```
Notice that we used 1 as the second argument, which indicates that the rows of the matrix/data frame will be the input for the function. If we change the second argument to 2, the columns will be the input for the function instead. See Figure [2\.14](functions-and-control-structures-for-ifelse-etc-.html#fig:applyConcept2) for the visualization of `apply()` on columns.
FIGURE 2\.14: apply() function on columns
```
result<-apply(mat,2,sum)
result
```
```
## [1] 9 3 6 2 3 3
```
Next, we will use `lapply()`, which applies a function on a list or a vector. The function that will be applied is a simple function that takes the square of a given number.
```
input=c(1,2,3)
lapply(input,function(x) x^2)
```
```
## [[1]]
## [1] 1
##
## [[2]]
## [1] 4
##
## [[3]]
## [1] 9
```
`mapply()` is another member of the apply family; it can apply a function over an arbitrary number of vectors/lists, so it is like a version of `lapply` that can handle multiple vectors as arguments. In this case, the arguments to `mapply()` are the function to be applied and the sets of parameters to be supplied as arguments of that function. As shown in the conceptualized Figure [2\.15](functions-and-control-structures-for-ifelse-etc-.html#fig:mapplyConcept), the function to be applied takes two arguments and sums them up. The arguments to be summed up are provided as the vectors `Xs` and `Ys`. `mapply()` applies the summation function to each pair in the `Xs` and `Ys` vectors. Notice that the order of the input function and extra arguments is different for `mapply` than for `apply`.
FIGURE 2\.15: mapply() concept.
```
Xs=0:5
Ys=c(2,2,2,3,3,3)
result<-mapply(function(x,y) sum(x,y),Xs,Ys)
result
```
```
## [1] 2 3 4 6 7 8
```
#### 2\.9\.2\.2 Apply family functions on multiple cores
If you have large data sets, apply family functions can be slow (although probably still better than for loops). If that is the case, you can easily use the parallel versions of those functions from the `parallel` package. These functions essentially divide your tasks into smaller chunks, run them on separate CPUs, and merge the results from those parallel operations. This concept is visualized in Figure [2\.16](functions-and-control-structures-for-ifelse-etc-.html#fig:mcapplyConcept) below, where a multi\-core apply (e.g. `mcmapply()` from the `parallel` package) runs the summation function on three different processors. Each processor executes the summation function on a part of the data set, and the results are merged and returned as a single vector that has the same order as the input parameters `Xs` and `Ys`.
FIGURE 2\.16: mcapply() concept.
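The book does not show code for this step here, but as a minimal sketch, the `parallel` package that ships with R provides `mclapply()` and `mcmapply()`, multi\-core counterparts of `lapply()` and `mapply()`. The example below parallelizes the earlier `mapply()` summation; the choice of 2 cores is arbitrary.

```
library(parallel)

Xs=0:5
Ys=c(2,2,2,3,3,3)

# parallel version of the earlier mapply() example: the Xs/Ys pairs are
# split across cores, summed, and the results merged in the input order.
# NOTE: mc.cores > 1 relies on forking, which is not available on Windows;
# set mc.cores = 1 there (or use a cluster via parLapply() instead).
result <- mcmapply(function(x,y) sum(x,y), Xs, Ys, mc.cores = 2)
result
```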
#### 2\.9\.2\.3 Vectorized functions in R
The above examples were put forward to illustrate functions and loops in R because functions using `sum()` are not complicated and are easy to understand. You will probably need loops and looping structures for more complicated functions. In reality, however, most of the operations we used above do not require loops or apply\-style constructs at all, because there are already vectorized functions that achieve the same outcomes: if the input arguments are R vectors, the output will be a vector as well, so there is no need for explicit looping.
For example, instead of using `mapply()` and `sum()` functions, we can just use the `+` operator and sum up `Xs` and `Ys`.
```
result=Xs+Ys
result
```
```
## [1] 2 3 4 6 7 8
```
In order to get the column or row sums, we can use the vectorized functions `colSums()` and `rowSums()`.
```
colSums(mat)
```
```
## [1] 9 3 6 2 3 3
```
```
rowSums(mat)
```
```
## [1] 12 3 5 6
```
However, remember that not every function is vectorized in R, so use the ones that are. But sooner or later, apply family functions will come in handy.
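As a small illustration of that point (a sketch, not from the book): a function whose body uses `if()` looks at only a single value, so it is not vectorized, but `sapply()` applies it element\-wise.

```
# a function built around if/else is not vectorized: if() inspects a single
# TRUE/FALSE, so isLong(c(100, 2000)) would not work element-wise
# (recent R versions raise an error for a length > 1 condition)
isLong <- function(len){
  if(len > 1500){ "long" } else { "short" }
}

# apply it to each element with sapply() instead
sapply(c(100, 2000, 800), isLong)
```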
2\.10 Exercises
---------------
### 2\.10\.1 Computations in R
1. Sum 2 and 3 using the `+` operator. \[Difficulty: **Beginner**]
2. Take the square root of 36, use `sqrt()`. \[Difficulty: **Beginner**]
3. Take the log10 of 1000, use function `log10()`. \[Difficulty: **Beginner**]
4. Take the log2 of 32, use function `log2()`. \[Difficulty: **Beginner**]
5. Assign the sum of 2,3 and 4 to variable x. \[Difficulty: **Beginner**]
6. Find the absolute value of the expression `5 - 145` using the `abs()` function. \[Difficulty: **Beginner**]
7. Calculate the square root of 625, divide it by 5, and assign the result to variable `x`. Ex: `y=log10(1000)/5`; the previous statement takes the log10 of 1000, divides it by 5, and assigns the value to variable `y`. \[Difficulty: **Beginner**]
8. Multiply the value you get from the previous exercise by 10000 and assign it to variable `x`.
Ex: `y=y*5` multiplies `y` by 5 and assigns the value to `y`.
**KEY CONCEPT:** results of computations or arbitrary values can be stored in variables; we can re\-use those variables later on and overwrite them with new values.
\[Difficulty: **Beginner**]
### 2\.10\.2 Data structures in R
10. Make a vector of 1,2,3,5 and 10 using `c()`, and assign it to the `vec` variable. Ex: `vec1=c(1,3,4)` makes a vector out of 1,3,4\. \[Difficulty: **Beginner**]
11. Check the length of your vector with length().
Ex: `length(vec1)` should return 3\. \[Difficulty: **Beginner**]
12. Make a vector of all numbers between 2 and 15\.
Ex: `vec=1:6` makes a vector of numbers between 1 and 6, and assigns it to the `vec` variable. \[Difficulty: **Beginner**]
13. Make a vector of 4s repeated 10 times using the `rep()` function. Ex: `rep(x=2,times=5)` makes a vector of 2s repeated 5 times. \[Difficulty: **Beginner**]
14. Make a logical vector with TRUE, FALSE values of length 4, use `c()`.
Ex: `c(TRUE,FALSE)`. \[Difficulty: **Beginner**]
15. Make a character vector of the gene names PAX6,ZIC2,OCT4 and SOX2\.
Ex: `avec=c("a","b","c")` makes a character vector of a,b and c. \[Difficulty: **Beginner**]
16. Subset the vector using `[]` notation, and get the 5th and 6th elements.
Ex: `vec1[1]` gets the first element. `vec1[c(1,3)]` gets the 1st and 3rd elements. \[Difficulty: **Beginner**]
17. You can also subset any vector using a logical vector in `[]`. Run the following:
```
myvec=1:5
# the length of the logical vector
# should be equal to length(myvec)
myvec[c(TRUE,TRUE,FALSE,FALSE,FALSE)]
myvec[c(TRUE,FALSE,FALSE,FALSE,TRUE)]
```
\[Difficulty: **Beginner**]
18. `==,>,<, >=, <=` operators create logical vectors. See the results of the following operations:
```
myvec > 3
myvec == 4
myvec <= 2
myvec != 4
```
\[Difficulty: **Beginner**]
19. Use the `>` operator in `myvec[ ]` to get elements larger than 2 in `myvec` which is described above. \[Difficulty: **Beginner**]
20. Make a 5x3 matrix (5 rows, 3 columns) using `matrix()`.
Ex: `matrix(1:6,nrow=3,ncol=2)` makes a 3x2 matrix using numbers between 1 and 6\. \[Difficulty: **Beginner**]
21. What happens when you use `byrow = TRUE` in your matrix() as an additional argument?
Ex: `mat=matrix(1:6,nrow=3,ncol=2,byrow = TRUE)`. \[Difficulty: **Beginner**]
22. Extract the first 3 columns and first 3 rows of your matrix using `[]` notation. \[Difficulty: **Beginner**]
23. Extract the last two rows of the matrix you created earlier.
Ex: `mat[2:3,]` or `mat[c(2,3),]` extracts the 2nd and 3rd rows.
\[Difficulty: **Beginner**]
24. Extract the first two columns and run `class()` on the result.
\[Difficulty: **Beginner**]
25. Extract the first column and run `class()` on the result, compare with the above exercise.
\[Difficulty: **Beginner**]
26. Make a data frame with 3 columns and 5 rows. Make sure the first column is a sequence
of numbers 1:5, and the second column is a character vector.
Ex: `df=data.frame(col1=1:3,col2=c("a","b","c"),col3=3:1) # 3x3 data frame`.
Remember you need to make a 5x3 (5 rows, 3 columns) data frame. \[Difficulty: **Beginner**]
27. Extract the first two columns and first two rows.
**HINT:** Use the same notation as matrices. \[Difficulty: **Beginner**]
28. Extract the last two rows of the data frame you made.
**HINT:** Same notation as matrices. \[Difficulty: **Beginner**]
29. Extract the last two columns using the column names of the data frame you made. \[Difficulty: **Beginner**]
30. Extract the second column using the column names.
You can use `[]` or `$` as in lists; use both in two different answers. \[Difficulty: **Beginner**]
31. Extract rows where the 1st column is larger than 3\.
**HINT:** You can get a logical vector using the `>` operator
, and logical vectors can be used in `[]` when subsetting. \[Difficulty: **Beginner**]
32. Extract rows where the 1st column is larger than or equal to 3\.
\[Difficulty: **Beginner**]
33. Convert a data frame to the matrix. **HINT:** Use `as.matrix()`.
Observe what happens to numeric values in the data frame. \[Difficulty: **Beginner**]
34. Make a list using the `list()` function. Your list should have 4 elements;
the one below has 2\. Ex: `mylist=list(a=c(1,2,3),b=c("apple","orange"))`
\[Difficulty: **Beginner**]
35. Select the 1st element of the list you made using `$` notation.
Ex: `mylist$a` selects first element named “a”.
\[Difficulty: **Beginner**]
36. Select the 4th element of the list you made earlier using `$` notation. \[Difficulty: **Beginner**]
37. Select the 1st element of your list using `[ ]` notation.
Ex: `mylist[1]` selects the first element named “a”, and you get a list with one element; `mylist["a"]` does the same by selecting with the name, again returning a list with one element.
\[Difficulty: **Beginner**]
38. Select the 4th element of your list using `[ ]` notation. \[Difficulty: **Beginner**]
39. Make a factor using factor(), with 5 elements.
Ex: `fa=factor(c("a","a","b"))`. \[Difficulty: **Beginner**]
40. Convert a character vector to a factor using `as.factor()`.
First, make a character vector using `c()` then use `as.factor()`.
\[Difficulty: **Intermediate**]
41. Convert the factor you made above to a character using `as.character()`. \[Difficulty: **Beginner**]
### 2\.10\.3 Reading in and writing data out in R
1. Read CpG island (CpGi) data from the compGenomRData package `CpGi.table.hg18.txt`. This is a tab\-separated file. Store it in a variable called `cpgi`. Use
```
cpgFilePath=system.file("extdata",
"CpGi.table.hg18.txt",
package="compGenomRData")
```
to get the file path within the installed `compGenomRData` package. \[Difficulty: **Beginner**]
2. Use `head()` on CpGi to see the first few rows. \[Difficulty: **Beginner**]
3. Why doesn’t the following work? See `sep` argument at `help(read.table)`. \[Difficulty: **Beginner**]
```
cpgtFilePath=system.file("extdata",
"CpGi.table.hg18.txt",
package="compGenomRData")
cpgtFilePath
cpgiSepComma=read.table(cpgtFilePath,header=TRUE,sep=",")
head(cpgiSepComma)
```
4. What happens when you set `stringsAsFactors=FALSE` in `read.table()`? \[Difficulty: **Beginner**]
```
cpgiHF=read.table("intro2R_data/data/CpGi.table.hg18.txt",
header=FALSE,sep="\t",
stringsAsFactors=FALSE)
```
5. Read only the first 10 rows of the CpGi table. \[Difficulty: **Beginner/Intermediate**]
6. Use `cpgFilePath=system.file("extdata","CpGi.table.hg18.txt",`
`package="compGenomRData")` to get the file path, then use
`read.table()` with argument `header=FALSE`. Use `head()` to see the results. \[Difficulty: **Beginner**]
7. Write CpG islands to a text file called “my.cpgi.file.txt”. Write the file
to your home folder; you can use `file="~/my.cpgi.file.txt"` in linux. `~/` denotes
home folder.\[Difficulty: **Beginner**]
8. Same as above but this time make sure to use the `quote=FALSE`,`sep="\t"` and `row.names=FALSE` arguments. Save the file to “my.cpgi.file2\.txt” and compare it with “my.cpgi.file.txt”. \[Difficulty: **Beginner**]
9. Write out the first 10 rows of the `cpgi` data frame.
**HINT:** Use subsetting for data frames we learned before. \[Difficulty: **Beginner**]
10. Write the first 3 columns of the `cpgi` data frame. \[Difficulty: **Beginner**]
11. Write CpG islands only on chr1\. **HINT:** Use subsetting with `[]`, feed a logical vector using `==` operator.\[Difficulty: **Beginner/Intermediate**]
12. Read two other data sets “rn4\.refseq.bed” and “rn4\.refseq2name.txt” with `header=FALSE`, and assign them to df1 and df2 respectively.
They are again included in the compGenomRData package, and you
can use the `system.file()` function to get the file paths. \[Difficulty: **Beginner**]
13. Use `head()` to see what is inside the data frames above. \[Difficulty: **Beginner**]
14. Merge data sets using `merge()` and assign the results to a variable named ‘new.df’, and use `head()` to see the results. \[Difficulty: **Intermediate**]
### 2\.10\.4 Plotting in R
Please run the following code snippet for the rest of the exercises.
```
set.seed(1001)
x1=1:100+rnorm(100,mean=0,sd=15)
y1=1:100
```
1. Make a scatter plot using the `x1` and `y1` vectors generated above. \[Difficulty: **Beginner**]
2. Use the `main` argument to give a title to `plot()` as in `plot(x,y,main="title")`. \[Difficulty: **Beginner**]
3. Use the `xlab` argument to set a label for the x\-axis. Use `ylab` argument to set a label for the y\-axis. \[Difficulty: **Beginner**]
4. Once you have the plot, run the following expression in the R console and see what it does: `mtext(side=3,text="hi there")`. **HINT:** `mtext` stands for margin text. \[Difficulty: **Beginner**]
5. See what `mtext(side=2,text="hi there")` does. Check your plot after execution. \[Difficulty: **Beginner**]
6. Use *mtext()* and *paste()* to put a margin text on the plot. You can use `paste()` as the ‘text’ argument in `mtext()`. **HINT:** `mtext(side=3,text=paste(...))`. See how `paste()` is used below. \[Difficulty: **Beginner/Intermediate**]
```
paste("Text","here")
```
```
## [1] "Text here"
```
```
myText=paste("Text","here")
myText
```
```
## [1] "Text here"
```
7. `cor()` calculates the correlation between two vectors.
Pearson correlation is a measure of the linear correlation (dependence)
between two variables X and Y. Try using the `cor()` function on the `x1` and `y1` variables. \[Difficulty: **Intermediate**]
8. Try to use `mtext()`,`cor()` and `paste()` to display the correlation coefficient on your scatter plot. \[Difficulty: **Intermediate**]
9. Change the colors of your plot using the `col` argument.
Ex: `plot(x,y,col="red")`. \[Difficulty: **Beginner**]
10. Use `pch=19` as an argument in your `plot()` command. \[Difficulty: **Beginner**]
11. Use `pch=18` as an argument to your `plot()` command. \[Difficulty: **Beginner**]
12. Make a histogram of `x1` with the `hist()` function. A histogram is a graphical representation of the data distribution. \[Difficulty: **Beginner**]
13. You can change colors with the ‘col’ argument, add axis labels with ‘xlab’ and ‘ylab’, and add a title with the ‘main’ argument. Try all of these in a histogram.
\[Difficulty: **Beginner**]
14. Make a boxplot of y1 with `boxplot()`.\[Difficulty: **Beginner**]
15. Make boxplots of `x1` and `y1` vectors in the same plot.\[Difficulty: **Beginner**]
16. In boxplot, use the `horizontal = TRUE` argument. \[Difficulty: **Beginner**]
17. Make multiple plots with `par(mfrow=c(2,1))`
* run `par(mfrow=c(2,1))`
* make a boxplot
* make a histogram
\[Difficulty: **Beginner/Intermediate**]
18. Do the same as above but this time with `par(mfrow=c(1,2))`. \[Difficulty: **Beginner/Intermediate**]
19. Save your plot using the “Export” button in Rstudio. \[Difficulty: **Beginner**]
20. You can make a scatter plot showing the density
of points rather than points themselves. If you use points it looks like this:
```
x2=1:1000+rnorm(1000,mean=0,sd=200)
y2=1:1000
plot(x2,y2,pch=19,col="blue")
```
If you use the `smoothScatter()` function, you get the densities.
```
smoothScatter(x2,y2,
colramp=colorRampPalette(c("white","blue",
"green","yellow","red")))
```
Now, plot with the `colramp=heat.colors` argument and then use a custom color scale using the following argument.
```
colramp = colorRampPalette(c("white","blue", "green","yellow","red"))
```
\[Difficulty: **Beginner/Intermediate**]
### 2\.10\.5 Functions and control structures (for, if/else, etc.)
Read CpG island data as shown below for the rest of the exercises.
```
cpgtFilePath=system.file("extdata",
"CpGi.table.hg18.txt",
package="compGenomRData")
cpgi=read.table(cpgtFilePath,header=TRUE,sep="\t")
head(cpgi)
```
```
## chrom chromStart chromEnd name length cpgNum gcNum perCpg perGc obsExp
## 1 chr1 18598 19673 CpG: 116 1075 116 787 21.6 73.2 0.83
## 2 chr1 124987 125426 CpG: 30 439 30 295 13.7 67.2 0.64
## 3 chr1 317653 318092 CpG: 29 439 29 295 13.2 67.2 0.62
## 4 chr1 427014 428027 CpG: 84 1013 84 734 16.6 72.5 0.64
## 5 chr1 439136 440407 CpG: 99 1271 99 777 15.6 61.1 0.84
## 6 chr1 523082 523977 CpG: 94 895 94 570 21.0 63.7 1.04
```
1. Check values in the perGc column using a histogram.
The ‘perGc’ column in the data stands for GC percent, i.e., the percentage of C\+G nucleotides. \[Difficulty: **Beginner**]
2. Make a boxplot for the ‘perGc’ column. \[Difficulty: **Beginner**]
3. Use an if/else structure to decide if the given GC percent is high, low, or medium:
low \< 60, high \> 75, and medium is between 60 and 75;
use the greater\-than or less\-than operators, `<` or `>`. Fill in the values in the code below, where it is written ‘YOU\_FILL\_IN’. \[Difficulty: **Intermediate**]
```
GCper=65
# check if GC value is lower than 60,
# assign "low" to result
if('YOU_FILL_IN'){
result="low"
cat("low")
}
else if('YOU_FILL_IN'){ # check if GC value is higher than 75,
#assign "high" to result
result="high"
cat("high")
}else{ # if those two conditions fail then it must be "medium"
result="medium"
}
result
```
4. Write a function that takes a value of GC percent and decides
if it is low, high, or medium: low \< 60, high\>75, medium is between 60 and 75\.
Fill in the values in the code below, where it is written ‘YOU\_FILL\_IN’. \[Difficulty: **Intermediate/Advanced**]
```
GCclass<-function(my.gc){
YOU_FILL_IN
return(result)
}
GCclass(10) # should return "low"
GCclass(90) # should return "high"
GCclass(65) # should return "medium"
```
5. Use a for loop to get GC percentage classes for `gcValues` below. Use the function
you wrote above.\[Difficulty: **Intermediate/Advanced**]
```
gcValues=c(10,50,70,65,90)
for( i in YOU_FILL_IN){
YOU_FILL_IN
}
```
6. Use `lapply` to get GC percentage classes for `gcValues`. \[Difficulty: **Intermediate/Advanced**]
```
vec=c(1,2,4,5)
power2=function(x){ return(x^2) }
lapply(vec,power2)
```
7. Use `sapply()` to get GC percentage classes for `gcValues`. \[Difficulty: **Intermediate**]
8. Is there a way to decide on the GC percentage class of a given vector of `GCpercentages`
without using an if/else structure and loops? If so, how can you do it?
**HINT:** Subsetting using \< and \> operators.
\[Difficulty: **Intermediate**]
3\.1 How to summarize collection of data points: The idea behind statistical distributions
------------------------------------------------------------------------------------------
In biology and many other fields, data is collected via experimentation.
The nature of the experiments and natural variation in biology makes
it impossible to get the same exact measurements every time you measure something.
For example, suppose you are measuring expression values for
a certain gene, say PAX6, per sample or per cell with any method (microarrays, RT\-qPCR, etc.). You will not
get exactly the same expression value every time, even if your samples are homogeneous, due
to technical bias in the experiments or natural variation in the samples. Instead,
we would like to describe this collection of data in some other way
that represents the general properties of the data. Figure [3\.1](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:pax6ReplicatesChp3) shows a sample of
20 expression values from the PAX6 gene.
FIGURE 3\.1: Expression of the PAX6 gene in 20 replicate experiments.
### 3\.1\.1 Describing the central tendency: Mean and median
As seen in Figure [3\.1](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:pax6ReplicatesChp3), the points from this sample are distributed around
a central value and the histogram below the dot plot shows the number of points in
each bin. Another observation is that there are some bins that have more points than others. If we want to summarize what we observe, we can try
to represent the collection of data points
with an expression value that is typical to get, something that represents the
general tendency we observe on the dot plot and the histogram. This value is
sometimes called the central
value or central tendency, and there are different ways to calculate such a value.
In Figure [3\.1](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:pax6ReplicatesChp3), we see that all the values are spread around 6\.13 (red line),
and that is indeed what we call the mean value of this sample of expression values.
It can be calculated with the following formula \\(\\overline{X}\=\\sum\_{i\=1}^n x\_i/n\\),
where \\(x\_i\\) is the expression value of an experiment and \\(n\\) is the number of
expression values obtained from the experiments. In R, the `mean()` function will calculate the
mean of a provided vector of numbers. This is called a “sample mean”. In reality, there are many more than 20 possible PAX6 expression values (provided each cell is of the
identical cell type and is in identical conditions). If we had the time and the funding to sample all cells and measure PAX6 expression we would
get a collection of values that would be called, in statistics, a “population”. In
our case, the population will look like the left hand side of the Figure [3\.2](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:pax6MorereplicatesChp3). What we have done with
our 20 data points is that we took a sample of PAX6 expression values from this
population, and calculated the sample mean.
FIGURE 3\.2: Expression of all possible PAX6 gene expression measures on all available biological samples (left). Expression of the PAX6 gene from the statistical sample, a random subset from the population of biological samples (right).
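As a quick numeric check of the sample mean formula above, here is a small sketch with made\-up values (not the book's PAX6 data):

```
# five hypothetical expression values (made up for illustration)
x <- c(6.0, 6.3, 5.9, 6.5, 6.1)

# the sample mean by hand: sum of the values divided by n
sum(x) / length(x)

# the same result with the built-in function
mean(x)
```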
The mean of the population is calculated the same way but traditionally the
Greek letter \\(\\mu\\) is used to denote the population mean. Normally, we would not
have access to the population and we will use the sample mean and other quantities
derived from the sample to estimate the population properties. This is the basic
idea behind statistical inference, which we will see in action in later
sections as well. We
estimate the population parameters from the sample parameters and there is some
uncertainty associated with those estimates. We will be trying to assess those
uncertainties and make decisions in the presence of those uncertainties.
We are not yet done with measuring central tendency.
There are other ways to describe it, such as the median value. The
mean can be affected by outliers easily.
If certain values are very high or low compared to the
bulk of the sample, this will shift the mean toward those outliers. However, the median is not affected by outliers. It is simply the value in a distribution where half
of the values are above and the other half are below. In R, the `median()` function
will calculate the median of a provided vector of numbers. Let’s create a set of random numbers and calculate their mean and median using
R.
```
#create 10 random numbers from uniform distribution
x=runif(10)
# calculate mean
mean(x)
```
```
## [1] 0.3738963
```
```
# calculate median
median(x)
```
```
## [1] 0.3277896
```
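To see the claim about outliers in action, here is a small made\-up example (not part of the book's code):

```
x <- c(1, 2, 3, 4, 5)
mean(x)    # 3
median(x)  # 3

# add a single extreme outlier
x2 <- c(x, 1000)
mean(x2)   # jumps to about 169.2, pulled toward the outlier
median(x2) # barely moves: 3.5
```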
### 3\.1\.2 Describing the spread: Measurements of variation
Another useful way to summarize a collection of data points is to measure
how variable the values are. You can simply describe the range of the values,
such as the minimum and maximum values. You can easily do that in R with the `range()`
function. A more common way to calculate variation is by calculating something
called “standard deviation” or the related quantity called “variance”. This is a
quantity that shows how variable the values are. A value around zero indicates
there is not much variation in the values of the data points, and a high value
indicates high variation in the values. The variance is the average squared distance of
the data points from the mean. Population variance is again a quantity we usually
do not have access to and is simply calculated as follows \\(\\sigma^2\=\\sum\_{i\=1}^n \\frac{(x\_i\-\\mu)^2}{n}\\), where \\(\\mu\\) is the population mean, \\(x\_i\\) is the \\(i\\)th
data point in the population and \\(n\\) is the population size. However, when we only have access to a sample, this formulation is biased. That means that it
underestimates the population variance, so we make a small adjustment when we
calculate the sample variance, denoted as \\(s^2\\):
\\\[
\\begin{aligned}
s^2\=\\sum\_{i\=1}^n \\frac{(x\_i\-\\overline{X})^2}{n\-1} \&\& \\text{ where $x\_i$ is the ith data point and
$\\overline{X}$ is the sample mean.}
\\end{aligned}
\\]
The sample standard deviation is simply the square root of the sample variance, \\(s\=\\sqrt{\\sum\_{i\=1}^n \\frac{(x\_i\-\\overline{X})^2}{n\-1}}\\).
The good thing about standard deviation is that it has the same unit as the mean
so it is more intuitive.
We can calculate the sample standard deviation and variance with the `sd()` and `var()`
functions in R. These functions take a vector of numeric values as input and
calculate the desired quantities. Below we use those functions on a randomly
generated vector of numbers.
```
x=rnorm(20,mean=6,sd=0.7)
var(x)
```
```
## [1] 0.2531495
```
```
sd(x)
```
```
## [1] 0.5031397
```
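As a sanity check on the \\(n\-1\\) formula above (a short sketch, not part of the book's code), we can compute the sample variance by hand on the `x` vector generated just above and compare it with `var()`:

```
# sample variance by hand, using the n-1 denominator
n <- length(x)
sum((x - mean(x))^2) / (n - 1)

# matches the built-in function
var(x)

# and sd() is simply the square root of var()
all.equal(sd(x), sqrt(var(x)))
```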
One potential problem with the variance is that it could be affected by
outliers. The points that are too far away from the mean will have a large
effect on the variance even though there might be few of them.
A way to measure variation that is less affected by outliers is
to look at where the bulk of the distribution is. How do we define where the bulk is?
One common way is to look at the difference between the 75th percentile and the 25th
percentile; this effectively removes a lot of potential outliers, which will be
towards the edges of the range of values.
This is called the interquartile range, and it
can be easily calculated in R via the `IQR()` function; the quantiles of a vector
are calculated with the `quantile()` function.
Let us plot the boxplot for a random vector and also calculate IQR using R.
In the boxplot (Figure [3\.3](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:boxplot2Chp3)), 25th and 75th percentiles are the edges of the box, and
the median is marked with a thick line cutting through the box.
```
x=rnorm(20,mean=6,sd=0.7)
IQR(x)
```
```
## [1] 0.5010954
```
```
quantile(x)
```
```
## 0% 25% 50% 75% 100%
## 5.437119 5.742895 5.860302 6.243991 6.558112
```
```
boxplot(x,horizontal = T)
```
FIGURE 3\.3: Boxplot showing the 25th percentile and 75th percentile and median for a set of points sampled from a normal distribution with mean\=6 and standard deviation\=0\.7\.
#### 3\.1\.2\.1 Frequently used statistical distributions
Statistical distributions have parameters (such as mean and variance) that
summarize them, but they are also functions that assign each outcome of a
statistical experiment to its probability of occurrence.
One distribution that you
will frequently encounter is the normal distribution or Gaussian distribution.
The normal distribution has a typical “bell\-curve” shape
and is characterized by mean and standard deviation. A set of data points
that
follow normal distribution will mostly be close to the mean
but spread around it, controlled by the standard deviation parameter. That
means that if we sample data points from a normal distribution, we are more
likely to sample data points near the mean and sometimes away from the mean.
The probability of an event occurring is higher if it is near the mean.
The effect
of the parameters for the normal distribution can be observed in the following
plot.
FIGURE 3\.4: Different parameters for normal distribution and effect of those on the shape of the distribution
The normal distribution is often denoted by \\(\\mathcal{N}(\\mu,\\,\\sigma^2\)\\). When a random variable \\(X\\) is distributed normally with mean \\(\\mu\\) and variance \\(\\sigma^2\\), we write:
\\\[X\\ \\sim\\ \\mathcal{N}(\\mu,\\,\\sigma^2\)\\]
The probability
density function of the normal distribution with mean \\(\\mu\\) and standard deviation
\\(\\sigma\\) is as follows:
\\\[P(x)\=\\frac{1}{\\sigma\\sqrt{2\\pi} } \\; e^{ \-\\frac{(x\-\\mu)^2}{2\\sigma^2} } \\]
The probability density function gives the relative likelihood (density) of observing a value
from a normal distribution defined by the \\(\\mu\\) and
\\(\\sigma\\) parameters.
Oftentimes, we do not need the exact probability of a value, but we need the
probability of observing a value larger or smaller than a critical value or reference
point. For example, we might want to know the probability of \\(X\\) being smaller than or
equal to \-2 for a normal distribution with mean \\(0\\) and standard deviation \\(2\\): \\(P(X \<\= \-2 \\; \| \\; \\mu\=0,\\sigma\=2\)\\). In this case, what we want is the area under the
curve shaded in dark blue. To be able to do that, we need to integrate the probability
density function but we will usually let software do that. Traditionally,
one calculates a Z\-score which is simply \\((X\-\\mu)/\\sigma\=(\-2\-0\)/2\= \-1\\), and
corresponds to how many standard deviations you are away from the mean.
This is also called “standardization”; the corresponding value follows the “standard normal distribution”, \\(\\mathcal{N}(0,\\,1\)\\). After calculating the Z\-score,
we can look up the area under the curve to the left and right of the Z\-score in a table, but again, we use software for that.
Such tables are outdated now that you can use a computer.
Below in Figure [3\.5](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:zscore), we show the Z\-score and the associated probabilities derived
from the calculation above for \\(P(X \<\= \-2 \\; \| \\; \\mu\=0,\\sigma\=2\)\\).
FIGURE 3\.5: Z\-score and associated probabilities for Z\= \-1
In R, the family of `*norm` functions (`rnorm`,`dnorm`,`qnorm` and `pnorm`) can
be used to
operate with the normal distribution, such as calculating probabilities and
generating random numbers drawn from a normal distribution. We show some of those capabilities below.
```
# get the value of probability density function when X= -2,
# where mean=0 and sd=2
dnorm(-2, mean=0, sd=2)
```
```
## [1] 0.1209854
```
```
# get the probability of P(X =< -2) where mean=0 and sd=2
pnorm(-2, mean=0, sd=2)
```
```
## [1] 0.1586553
```
```
# get the probability of P(X > -2) where mean=0 and sd=2
pnorm(-2, mean=0, sd=2,lower.tail = FALSE)
```
```
## [1] 0.8413447
```
```
# get 5 random numbers from normal dist with mean=0 and sd=2
rnorm(5, mean=0 , sd=2)
```
```
## [1] -1.8109030 -1.9220710 -0.5146717 0.8216728 -0.7900804
```
```
# get y value corresponding to P(X =< y) = 0.15 with mean=0 and sd=2
qnorm( 0.15, mean=0 , sd=2)
```
```
## [1] -2.072867
```
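To connect these functions back to the Z\-score discussion above (a quick check, not in the book's code): standardizing \\(X\=\-2\\) with \\(\\mu\=0\\) and \\(\\sigma\=2\\) gives \\(Z\=\-1\\), and the standard normal probability at that Z\-score matches the direct calculation:

```
# P(X =< -2) computed directly on the normal distribution with mean=0, sd=2 ...
pnorm(-2, mean=0, sd=2)

# ... equals P(Z =< -1) on the standard normal N(0,1)
pnorm(-1)
```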
There are many other distribution functions in R that can be used the same
way. You have to enter the distribution\-specific parameters along
with your critical value, quantiles, or number of random numbers depending
on which function in the family you are using. We will list some of those functions below; a brief usage sketch follows the list.
* `dbinom` is for the binomial distribution. This distribution is usually used
to model fractional data and binary data. Examples from genomics include
methylation data.
* `dpois` is used for the Poisson distribution and `dnbinom` is used for
the negative binomial distribution. These distributions are used to model count
data such as sequencing read counts.
* `df` (F distribution) and `dchisq` (Chi\-Squared distribution) are used
in relation to the distribution of variation. The F distribution is used to model
ratios of variation and the Chi\-Squared distribution is used to model
the distribution of variation. You will frequently encounter these in linear models and generalized linear models.
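As a brief usage sketch (not from the book), these functions follow the same pattern as the `*norm` family, with distribution\-specific parameters:

```
# probability of exactly 3 successes in 10 trials with success probability 0.5
dbinom(3, size = 10, prob = 0.5)

# probability of observing 5 or fewer counts when the Poisson mean is 10
ppois(5, lambda = 10)

# 5 random draws from a negative binomial with mean mu = 10 and dispersion size = 2
rnbinom(5, mu = 10, size = 2)
```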
### 3\.1\.3 Precision of estimates: Confidence intervals
When we take a random sample from a population and compute a statistic, such as
the mean, we are trying to approximate the mean of the population. How well this
sample statistic estimates the population value will always be a
concern. A confidence interval addresses this concern because it provides a
range of values which will plausibly contain the population parameter of interest.
Normally, we would not have access to a population. If we did, we would not have to estimate the population parameters and their precision.
When we do not have access
to the population, one way to estimate intervals is to repeatedly take samples from the
original sample with replacement; that is, we take a data point from the sample,
put it back (replace it), and take another data point until we have a sample of the
same size as the original sample. Then, we calculate the parameter of interest, in this case the mean, and
repeat this process a large number of times, such as 1000\. At this point, we would have a distribution of re\-sampled
means. We can then calculate the 2\.5th and 97\.5th percentiles and these will
be our so\-called 95% confidence interval. This procedure, resampling with replacement to
estimate the precision of population parameter estimates, is known as the **bootstrap resampling** or **bootstraping**.
Let’s see how we can do this in practice. We simulate a sample coming from a normal distribution (but we pretend we don’t know the population parameters). We will estimate the precision of the sample mean by using bootstrapping to build confidence intervals; the resulting plot is shown in Figure [3\.6](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:bootstrapChp3).
```
library(mosaic)
set.seed(21)
sample1= rnorm(50,20,5) # simulate a sample
# do bootstrap resampling, sampling with replacement
boot.means=do(1000) * mean(resample(sample1))
# get percentiles from the bootstrap means
q=quantile(boot.means[,1],p=c(0.025,0.975))
# plot the histogram
hist(boot.means[,1],col="cornflowerblue",border="white",
xlab="sample means")
abline(v=c(q[1], q[2] ),col="red")
text(x=q[1],y=200,round(q[1],3),adj=c(1,0))
text(x=q[2],y=200,round(q[2],3),adj=c(0,0))
```
FIGURE 3\.6: Precision estimate of the sample mean using 1000 bootstrap samples. Confidence intervals derived from the bootstrap samples are shown with red lines.
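If you prefer not to depend on the `mosaic` package, the same resampling idea can be sketched in base R. This assumes the `sample1` vector from the chunk above; the exact numbers will differ slightly from the mosaic result even with the seed fixed, since the random draws are made differently.
```
set.seed(21)
# 1000 bootstrap means using base R only
boot.means.base <- replicate(1000, mean(sample(sample1, replace = TRUE)))
# 2.5th and 97.5th percentiles give the 95% bootstrap interval
quantile(boot.means.base, probs = c(0.025, 0.975))
```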
If we had a convenient mathematical method to calculate the confidence interval,
we could also do without resampling methods. It turns out that if we take
repeated
samples from a population with sample size \\(n\\), the distribution of means
(\\(\\overline{X}\\)) of those samples
will be approximately normal with mean \\(\\mu\\) and standard deviation
\\(\\sigma/\\sqrt{n}\\). This is also known as the **Central Limit Theorem (CLT)** and is one of the most important theorems in statistics. This also means that \\(\\frac{\\overline{X}\-\\mu}{\\sigma/\\sqrt{n}}\\) has a standard normal distribution and we can calculate the Z\-score, and then we can get
the percentiles associated with the Z\-score. Below, we are showing the
Z\-score
calculation for the distribution of \\(\\overline{X}\\), and then
we are deriving the confidence intervals starting with the fact that
the probability of Z being between \\(\-1\.96\\) and \\(1\.96\\) is \\(0\.95\\). We then use algebra
to show that the probability that unknown \\(\\mu\\) is captured between
\\(\\overline{X}\-1\.96\\sigma/\\sqrt{n}\\) and \\(\\overline{X}\+1\.96\\sigma/\\sqrt{n}\\) is \\(0\.95\\), which is commonly known as the 95% confidence interval.
\\\[\\begin{array}{ccc}
Z\=\\frac{\\overline{X}\-\\mu}{\\sigma/\\sqrt{n}}\\\\
P(\-1\.96 \< Z \< 1\.96\)\=0\.95 \\\\
P(\-1\.96 \< \\frac{\\overline{X}\-\\mu}{\\sigma/\\sqrt{n}} \< 1\.96\)\=0\.95\\\\
P(\\mu\-1\.96\\sigma/\\sqrt{n} \< \\overline{X} \< \\mu\+1\.96\\sigma/\\sqrt{n})\=0\.95\\\\
P(\\overline{X}\-1\.96\\sigma/\\sqrt{n} \< \\mu \< \\overline{X}\+1\.96\\sigma/\\sqrt{n})\=0\.95\\\\
confint\=\[\\overline{X}\-1\.96\\sigma/\\sqrt{n},\\overline{X}\+1\.96\\sigma/\\sqrt{n}]
\\end{array}\\]
A 95% confidence interval for the population mean is the most common interval to use, and would
mean that we would expect 95% of the interval estimates to include the
population parameter, in this case, the mean. However, we can pick any value
such as 99% or 90%. We can generalize the confidence interval for a \\(100(1\-\\alpha)\\)% confidence level as follows:
\\\[\\overline{X} \\pm Z\_{\\alpha/2}\\sigma/\\sqrt{n}\\]
In R, we can do this using the `qnorm()` function to get the Z\-scores associated with \\({\\alpha/2}\\) and \\({1\-\\alpha/2}\\). As you can see below, the confidence interval we calculate using the CLT is very similar to the one we got from the bootstrap for the same sample: the bootstrap gave \\(\[19\.21, 21\.989]\\) and the CLT\-based estimate gives \\(\[19\.23638, 22\.00819]\\).
```
alpha=0.05 # 1 - confidence level
sd=5       # population standard deviation (known here because we simulated the data)
n=50       # sample size
mean(sample1)+qnorm(c(alpha/2,1-alpha/2))*sd/sqrt(n)
```
```
## [1] 19.23638 22.00819
```
The useful thing about the CLT is that, as long as the sample size is large enough, the distribution of sample means drawn from a population will be approximately normal regardless of the shape of the population distribution. In Figure [3\.7](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:sampleMeanschp3), we repeatedly draw 1000 samples of size \\(n\=10\\), \\(30\\), and \\(100\\) from a bimodal, an exponential and a uniform distribution, and the resulting distributions of sample means follow a normal distribution.
FIGURE 3\.7: Sample means are normally distributed regardless of the population distribution they are drawn from.
However, we should note that the way we constructed the confidence interval, using the standard normal distribution \\(N(0,1\)\\), only works when we know the population standard deviation. In reality, we usually only have access to a sample and have no idea about the population standard deviation. If this is the case, we should estimate the standard deviation using the sample standard deviation and use the *t distribution* instead of the standard normal distribution in our interval calculation. Our confidence interval becomes \\(\\overline{X} \\pm t\_{\\alpha/2}s/\\sqrt{n}\\), with the t distribution parameter \\(d.f\=n\-1\\), since the quantity \\(\\frac{\\overline{X}\-\\mu}{s/\\sqrt{n}}\\) now follows a t distribution instead of a standard normal distribution. The t distribution is similar to the standard normal distribution: it has mean \\(0\\), but its spread is larger than the normal distribution, especially when the sample size is small. It has one parameter \\(v\\) for the degrees of freedom, which is \\(n\-1\\) in this case. Degrees of freedom is simply the number of data points minus the number of parameters estimated. Here we are estimating the mean from the data, therefore the degrees of freedom is \\(n\-1\\). The resulting distributions are shown in Figure [3\.8](how-to-summarize-collection-of-data-points-the-idea-behind-statistical-distributions.html#fig:tdistChp3).
FIGURE 3\.8: Normal distribution and t distribution with different degrees of freedom. With increasing degrees of freedom, the t distribution approximates the normal distribution better.
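As a minimal sketch of the t\-based interval, reusing `sample1` from above and now pretending we do not know the population standard deviation, we can replace `qnorm()` with `qt()` and the known \\(\\sigma\\) with the sample standard deviation:
```
alpha=0.05
n=length(sample1)
s=sd(sample1)   # sample standard deviation
# t-based 95% confidence interval with n-1 degrees of freedom
mean(sample1)+qt(c(alpha/2,1-alpha/2),df=n-1)*s/sqrt(n)
```
With \\(n\=50\\), the t quantiles are only slightly larger than the normal ones, so this interval is just a bit wider than the CLT\-based interval above.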
3\.2 How to test for differences between samples
------------------------------------------------
Oftentimes we want to compare sets of samples. Such comparisons include whether wild\-type samples have different expression than mutants, or whether healthy samples differ from disease samples in some measurable feature (blood count, gene expression, methylation of certain loci). Since there is variability in our measurements, we need to take that into account when comparing the sets of samples. We could simply subtract the means of the two samples, but given the variability introduced by sampling, we at least need to decide on a cutoff value for the difference of means; small differences of means can be explained by random chance due to sampling. That means we need to compare the difference we observe to the differences that are typical when the two group means differ only because of sampling. If you followed the logic above, we have just introduced two core ideas of something called “hypothesis testing”, which is simply using statistics to determine the probability that a given hypothesis (for example, that two sets of samples come from the same population) is true. Formally, the expanded version of those two core ideas is as follows:
1. Decide on a hypothesis to test, often called the “null hypothesis” (\\(H\_0\\)). In our
case, the hypothesis is that there is no difference between sets of samples. An “alternative hypothesis” (\\(H\_1\\)) is that there is a difference between the
samples.
2. Decide on a statistic to test the truth of the null hypothesis.
3. Calculate the statistic.
4. Compare it to a reference value to establish significance, the P\-value. Based on that, either reject or fail to reject the null hypothesis, \\(H\_0\\).
### 3\.2\.1 Randomization\-based testing for difference of the means
There is one intuitive way to go about this. If we believe there are no
differences between samples, that means the sample labels (test vs. control or
healthy vs. disease) have no meaning. So, if we randomly assign labels to the
samples and calculate the difference of the means, this creates a null
distribution for \\(H\_0\\) where we can compare the real difference and
measure how unlikely it is to get such a value under the expectation of the
null hypothesis. We could enumerate all possible permutations to calculate the null distribution. However, sometimes that is not feasible, and an equivalent approach is to generate the null distribution from a smaller number of random shuffles of the group membership.
Below, we are doing this process in R. We are first simulating two samples
from two different distributions.
These would be equivalent to gene expression measurements obtained under
different conditions. Then, we calculate the differences in the means
and do the randomization procedure to get a null distribution when we
assume there is no difference between samples, \\(H\_0\\). We then calculate how often we would get the original difference we calculated under the assumption that \\(H\_0\\) is true. The resulting null distribution and the original value are shown in Figure [3\.9](how-to-test-for-differences-between-samples.html#fig:randomTestchp3).
```
set.seed(100)
gene1=rnorm(30,mean=4,sd=2)
gene2=rnorm(30,mean=2,sd=2)
org.diff=mean(gene1)-mean(gene2)
gene.df=data.frame(exp=c(gene1,gene2),
group=c( rep("test",30),rep("control",30) ) )
exp.null <- do(1000) * diff(mosaic::mean(exp ~ shuffle(group), data=gene.df))
hist(exp.null[,1],xlab="null distribution | no difference in samples",
main=expression(paste(H[0]," :no difference in means") ),
xlim=c(-2,2),col="cornflowerblue",border="white")
abline(v=quantile(exp.null[,1],0.95),col="red" )
abline(v=org.diff,col="blue" )
text(x=quantile(exp.null[,1],0.95),y=200,"0.05",adj=c(1,0),col="red")
text(x=org.diff,y=200,"org. diff.",adj=c(1,0),col="blue")
```
FIGURE 3\.9: The null distribution for differences of means obtained via randomization. The original difference is marked via the blue line. The red line marks the value that corresponds to P\-value of 0\.05
```
p.val=sum(exp.null[,1]>org.diff)/length(exp.null[,1])
p.val
```
```
## [1] 0.001
```
After doing random permutations and getting a null distribution, it is possible to get a confidence interval for the distribution of the difference in means. This is simply the \\(2\.5th\\) and \\(97\.5th\\) percentiles of the null distribution, and it is directly related to the P\-value calculation above.
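For instance, a sketch of that interval using the `exp.null` object from the chunk above:
```
# 95% interval of the randomization-based null distribution of mean differences
quantile(exp.null[,1], probs = c(0.025, 0.975))
```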
### 3\.2\.2 Using t\-test for difference of the means between two samples
We can also test the difference between means using a t\-test. Sometimes we have too few data points in a sample to do a meaningful randomization test, and randomization also takes more time than a t\-test. The t\-test depends on the t distribution; the line of thought follows from the CLT, and we can show that the standardized difference in means is t distributed.
There are a couple of variants of the t\-test for this purpose. If we assume
the population variances are equal we can use the following version
\\\[t \= \\frac{\\bar {X}\_1 \- \\bar{X}\_2}{s\_{X\_1X\_2} \\cdot \\sqrt{\\frac{1}{n\_1}\+\\frac{1}{n\_2}}}\\]
where
\\\[s\_{X\_1X\_2} \= \\sqrt{\\frac{(n\_1\-1\)s\_{X\_1}^2\+(n\_2\-1\)s\_{X\_2}^2}{n\_1\+n\_2\-2}}\\]
In the first equation above, the quantity is t distributed with \\(n\_1\+n\_2\-2\\) degrees of freedom. We can calculate the quantity and then use software
to look for the percentile of that value in that t distribution, which gives our P\-value. When we cannot assume equal variances, we use “Welch’s t\-test”, which is the default in R’s `t.test()` and also works well when the variances and sample sizes are equal. For this test we calculate the following quantity:
\\\[t \= \\frac{\\overline{X}\_1 \- \\overline{X}\_2}{s\_{\\overline{X}\_1 \- \\overline{X}\_2}}\\]
where
\\\[s\_{\\overline{X}\_1 \- \\overline{X}\_2} \= \\sqrt{\\frac{s\_1^2 }{ n\_1} \+ \\frac{s\_2^2 }{n\_2}}\\]
and the degrees of freedom are given by
\\\[\\mathrm{d.f.} \= \\frac{(s\_1^2/n\_1 \+ s\_2^2/n\_2\)^2}{(s\_1^2/n\_1\)^2/(n\_1\-1\) \+ (s\_2^2/n\_2\)^2/(n\_2\-1\)}
\\]
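To make the formulas concrete, here is a sketch that computes Welch’s statistic, degrees of freedom, and two\-sided P\-value directly from the `gene1` and `gene2` vectors simulated earlier; it should reproduce the values reported by `t.test()` below.
```
s1=var(gene1); s2=var(gene2)
n1=length(gene1); n2=length(gene2)
se=sqrt(s1/n1 + s2/n2)                 # standard error of the difference
t.stat=(mean(gene1)-mean(gene2))/se    # Welch's t statistic
df=(s1/n1 + s2/n2)^2 /
   ((s1/n1)^2/(n1-1) + (s2/n2)^2/(n2-1))  # Welch-Satterthwaite d.f.
2*pt(-abs(t.stat), df)                 # two-sided P-value
```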
Luckily, R does all those calculations for us. Below we show the use of the `t.test()` function in R, applied to the samples we simulated above.
```
# Welch's t-test
stats::t.test(gene1,gene2)
```
```
##
## Welch Two Sample t-test
##
## data: gene1 and gene2
## t = 3.7653, df = 47.552, p-value = 0.0004575
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.872397 2.872761
## sample estimates:
## mean of x mean of y
## 4.057728 2.185149
```
```
# t-test with equal variance assumption
stats::t.test(gene1,gene2,var.equal=TRUE)
```
```
##
## Two Sample t-test
##
## data: gene1 and gene2
## t = 3.7653, df = 58, p-value = 0.0003905
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.8770753 2.8680832
## sample estimates:
## mean of x mean of y
## 4.057728 2.185149
```
A final word on t\-tests: they generally assume that the samples come from a normally distributed population. However, it has been shown that the t\-test can tolerate deviations from normality, especially when the two distributions are moderately skewed in the same direction. This is due to the central limit theorem, which says that the means of samples will be approximately normally distributed no matter the population distribution, provided the sample sizes are large.
### 3\.2\.3 Multiple testing correction
We should think of hypothesis testing as a non\-error\-free method of making
decisions. There will be times when we declare something significant and accept
\\(H\_1\\) but we will be wrong.
These decisions are also called “false positives” or “false discoveries”, and are also known as “type I errors”. Similarly, we can fail to reject a hypothesis
when we actually should. These cases are known as “false negatives”, also known
as “type II errors”.
The ratio of true negatives to the sum of
true negatives and false positives (\\(\\frac{TN}{FP\+TN}\\)) is known as specificity.
And we usually want to decrease the FP and get higher specificity.
The ratio of true positives to the sum of
true positives and false negatives (\\(\\frac{TP}{TP\+FN}\\)) is known as sensitivity.
And, again, we usually want to decrease the FN and get higher sensitivity.
Sensitivity is also known as the “power of a test” in the context of hypothesis
testing. More powerful tests will be highly sensitive and will have fewer type
II errors. For the t\-test, the power is positively associated with sample size
and the effect size. The larger the sample size, the smaller the standard error, and
looking for the larger effect sizes will similarly increase the power.
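As a toy illustration of these definitions (the counts below are invented for demonstration only):
```
TP=80; FN=20; TN=900; FP=50   # hypothetical test outcome counts
sensitivity=TP/(TP+FN)        # 0.8
specificity=TN/(TN+FP)        # approx. 0.947
c(sensitivity=sensitivity, specificity=specificity)
```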
The general summary of these different decision combinations is included in the table below.
| | \\(H\_0\\) is TRUE, \[Gene is NOT differentially expressed] | \\(H\_1\\) is TRUE, \[Gene is differentially expressed] | |
| --- | --- | --- | --- |
| Accept \\(H\_0\\) (claim that the gene is not differentially expressed) | True Negatives (TN) | False Negatives (FN) ,type II error | \\(m\_0\\): number of truly null hypotheses |
| reject \\(H\_0\\) (claim that the gene is differentially expressed) | False Positives (FP) ,type I error | True Positives (TP) | \\(m\-m\_0\\): number of truly alternative hypotheses |
We expect to make more type I errors as the number of tests increases, which means we will reject the null hypothesis by mistake more often. For example, if we perform a test at the 5% significance level, there is a 5% chance of incorrectly rejecting the null hypothesis when the null hypothesis is true. However, if we perform 1000 tests where all null hypotheses are true, the average number of incorrect rejections is 50\. And if we apply the rules of probability, there is almost a 100% chance that we will have at least one incorrect rejection.
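The arithmetic behind that claim is straightforward (assuming, for simplicity, that the tests are independent):
```
m=1000
alpha=0.05
m*alpha          # expected number of false rejections: 50
1-(1-alpha)^m    # probability of at least one false rejection (essentially 1)
```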
There are multiple statistical techniques to prevent this from happening. These techniques generally push the P\-values obtained from multiple tests to higher values; if the individual P\-value is low enough, it survives this process. The simplest method is just to multiply the individual P\-value (\\(p\_i\\)) by the number of tests (\\(m\\)), giving \\(m \\cdot p\_i\\). This is called the “Bonferroni correction”. However, it is too harsh if you have thousands of tests. Other methods have been developed to remedy this. Those methods rely on ranking the P\-values and dividing \\(m \\cdot p\_i\\) by the rank \\(i\\), giving \\(\\frac{m \\cdot p\_i }{i}\\), which is derived from the Benjamini–Hochberg procedure. This procedure was developed to control the “False Discovery Rate (FDR)”, which is the proportion of false positives among all significant tests. In practical terms, we get the “FDR\-adjusted P\-value” from the procedure described above, which gives us an estimate of the proportion of false discoveries for a given test. To elaborate, a P\-value cutoff of 0\.05 implies that 5% of the truly null tests will be called significant (false positives), whereas an FDR\-adjusted P\-value cutoff of 0\.05 implies that about 5% of the tests called significant will be false positives. The FDR\-adjusted P\-values will therefore result in a lower number of false positives.
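As a minimal sketch with a toy set of P\-values (values chosen arbitrarily for illustration):
```
pvals=c(0.0001, 0.004, 0.02, 0.03, 0.2)
p.adjust(pvals, method="bonferroni")  # multiply by m, capped at 1
p.adjust(pvals, method="BH")          # Benjamini-Hochberg (FDR) adjustment
```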
One final method that is also popular is called the “q\-value” method and is related to the method above. This procedure relies on estimating the proportion of true null hypotheses from the distribution of raw P\-values and using that quantity to come up with what is called a “q\-value”, which is also an FDR\-adjusted P\-value (Storey and Tibshirani [2003](#ref-Storey2003-nv)). That can be practically defined as “the proportion of significant features that turn out to be false leads.” A q\-value of 0\.01 would mean that, on average, 1% of the tests called significant at this level are truly null. Within the genomics community, q\-value and FDR\-adjusted P\-value are used synonymously although they can be calculated differently.
In R, the base function `p.adjust()` implements most of the P\-value correction methods described above. For the q\-value, we can use the `qvalue` package from Bioconductor. Below we demonstrate how to use them on a set of P\-values. The plot in Figure [3\.10](how-to-test-for-differences-between-samples.html#fig:multtest) shows that the Bonferroni correction is very conservative. The FDR (BH) and q\-value approaches are less so, but the q\-value approach is more permissive than FDR (BH).
```
library(qvalue)
data(hedenfalk)
qvalues <- qvalue(hedenfalk$p)$q
bonf.pval=p.adjust(hedenfalk$p,method ="bonferroni")
fdr.adj.pval=p.adjust(hedenfalk$p,method ="fdr")
plot(hedenfalk$p,qvalues,pch=19,ylim=c(0,1),
xlab="raw P-values",ylab="adjusted P-values")
points(hedenfalk$p,bonf.pval,pch=19,col="red")
points(hedenfalk$p,fdr.adj.pval,pch=19,col="blue")
legend("bottomright",legend=c("q-value","FDR (BH)","Bonferroni"),
fill=c("black","blue","red"))
```
FIGURE 3\.10: Adjusted P\-values via different methods and their relationship to raw P\-values
### 3\.2\.4 Moderated t\-tests: Using information from multiple comparisons
In genomics, we usually do not do one test but many, as described above. That means we
may be able to use the information from the parameters obtained from all
comparisons to influence the individual parameters. For example, if you have many variances
calculated for thousands of genes across samples, you can force individual
variance estimates to shrink toward the mean or the median of the distribution
of variances. This usually creates better performance in individual variance
estimates and therefore better performance in significance testing, which
depends on variance estimates. How much the values are shrunk toward a common
value depends on the exact method used. These tests in general are called moderated
t\-tests or shrinkage t\-tests. One approach, popularized by the limma software, is to use so\-called “Empirical Bayes methods”. The main formulation in these methods is \\(\\hat{V\_g} \= aV\_0 \+ bV\_g\\), where \\(V\_0\\) is the background variability and \\(V\_g\\) is the individual variability. These methods then estimate \\(a\\) and \\(b\\) in various ways to come up with a “shrunk” version of the variability, \\(\\hat{V\_g}\\). Bayesian inference can make use of prior knowledge to make inferences about properties of the data. From a Bayesian viewpoint, the prior knowledge, in this case the variability of the other genes, can be used to calculate the variability of an individual gene. In our case, \\(V\_0\\) would be the prior knowledge we have on the variability of the genes, and we use that knowledge to influence our estimate for the individual genes.
Below we are simulating a gene expression matrix with 1000 genes, and 3 test
and 3 control groups. Each row is a gene, and in normal circumstances we would
like to find differentially expressed genes. In this case, we are simulating
them from the same distribution, so in reality we do not expect any differences.
We then adjust the standard error estimates in an empirical Bayes spirit, but in a very crude way: we simply shrink the gene\-wise standard error estimates towards the median with equal \\(a\\) and \\(b\\) weights. That is to say, we add the individual estimate to the median of the standard error distribution across all genes and divide that quantity by 2\. So if we plug that into the above formula, what we do is:
\\\[ \\hat{V\_g} \= (V\_0 \+ V\_g)/2 \\]
In the code below, we avoid for loops and apply\-family functions by using vectorized operations. The code samples gene expression values from a hypothetical distribution. Since all the values come from the same distribution, we do not expect differences between groups. We then calculate moderated and unmoderated t\-test statistics and plot the P\-value distributions for the tests. The results are shown in Figure [3\.11](how-to-test-for-differences-between-samples.html#fig:modTtestChp3).
```
set.seed(100)
#sample data matrix from normal distribution
gset=rnorm(3000,mean=200,sd=70)
data=matrix(gset,ncol=6)
# set groups
group1=1:3
group2=4:6
n1=3
n2=3
dx=rowMeans(data[,group1])-rowMeans(data[,group2])
require(matrixStats)
# get the estimate of the pooled standard error
stderr = sqrt( (rowVars(data[,group1])*(n1-1) +
rowVars(data[,group2])*(n2-1)) / (n1+n2-2) * ( 1/n1 + 1/n2 ))
# do the shrinking towards median
mod.stderr = (stderr + median(stderr)) / 2 # moderation in variation
# estimate the t statistic with the moderated standard error
t.mod <- dx / mod.stderr
# calculate P-value of rejecting null
p.mod = 2*pt( -abs(t.mod), n1+n2-2 )
# estimate the t statistic without moderation
t = dx / stderr
# calculate P-value of rejecting null
p = 2*pt( -abs(t), n1+n2-2 )
par(mfrow=c(1,2))
hist(p,col="cornflowerblue",border="white",main="",xlab="P-values t-test")
mtext(paste("signifcant tests:",sum(p<0.05)) )
hist(p.mod,col="cornflowerblue",border="white",main="",
xlab="P-values mod. t-test")
mtext(paste("signifcant tests:",sum(p.mod<0.05)) )
```
FIGURE 3\.11: The distributions of P\-values obtained by t\-tests and moderated t\-tests
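For real analyses, the moderation step is usually not done by hand. Below is a hedged sketch of the same idea using the limma package from Bioconductor, assuming the simulated `data` matrix from the chunk above; the column labels and the design matrix here are illustrative, not prescribed by the text.
```
library(limma)
# design matrix: intercept plus a test-vs-control indicator for the 6 columns
design <- cbind(Intercept=1, TestVsCtrl=c(1,1,1,0,0,0))
fit <- lmFit(data, design)   # gene-wise linear models
fit <- eBayes(fit)           # empirical Bayes moderation of the variances
topTable(fit, coef="TestVsCtrl", number=5)  # top-ranked genes by moderated test
```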
**Want to know more ?**
* Basic statistical concepts
+ “Cartoon guide to statistics” by Gonick \& Smith (Gonick and Smith [2005](#ref-gonick2005cartoon)). Provides central concepts depicted as cartoons in a funny but clear and accurate manner.
+ “OpenIntro Statistics” (Diez, Barr, Çetinkaya\-Rundel, et al. [2015](#ref-diez2015openintro)) (Free e\-book <http://openintro.org>). This book provides fundamental statistical concepts in a clear and easy way. It includes R code.
* Hands\-on statistics recipes with R
+ “The R book” (Crawley [2012](#ref-crawley2012r)). This is the main R book for anyone interested in statistical concepts and their application in R. It requires some background in statistics since the main focus is applications in R.
* Moderated tests
+ Comparison of moderated tests for differential expression (De Hertogh, De Meulder, Berger, et al. [2010](#ref-de2010benchmark)) [http://bmcbioinformatics.biomedcentral.com/articles/10\.1186/1471\-2105\-11\-17](http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-17)
+ Limma method developed for testing differential expression between genes using a moderated test (Smyth Gordon [2004](#ref-smyth2004linear)) <http://www.statsci.org/smyth/pubs/ebayes.pdf>
### 3\.2\.1 Randomization\-based testing for difference of the means
There is one intuitive way to go about this. If we believe there are no
differences between samples, that means the sample labels (test vs. control or
healthy vs. disease) have no meaning. So, if we randomly assign labels to the
samples and calculate the difference of the means, this creates a null
distribution for \\(H\_0\\) where we can compare the real difference and
measure how unlikely it is to get such a value under the expectation of the
null hypothesis. We can calculate all possible permutations to calculate
the null distribution. However, sometimes that is not very feasible and the
equivalent approach would be generating the null distribution by taking a
smaller number of random samples with shuffled group membership.
Below, we are doing this process in R. We are first simulating two samples
from two different distributions.
These would be equivalent to gene expression measurements obtained under
different conditions. Then, we calculate the differences in the means
and do the randomization procedure to get a null distribution when we
assume there is no difference between samples, \\(H\_0\\). We then calculate how
often we would get the original difference we calculated under the
assumption that \\(H\_0\\) is true. The resulting null distribution and the original value is shown in Figure [3\.9](how-to-test-for-differences-between-samples.html#fig:randomTestchp3).
```
set.seed(100)
gene1=rnorm(30,mean=4,sd=2)
gene2=rnorm(30,mean=2,sd=2)
org.diff=mean(gene1)-mean(gene2)
gene.df=data.frame(exp=c(gene1,gene2),
group=c( rep("test",30),rep("control",30) ) )
exp.null <- do(1000) * diff(mosaic::mean(exp ~ shuffle(group), data=gene.df))
hist(exp.null[,1],xlab="null distribution | no difference in samples",
main=expression(paste(H[0]," :no difference in means") ),
xlim=c(-2,2),col="cornflowerblue",border="white")
abline(v=quantile(exp.null[,1],0.95),col="red" )
abline(v=org.diff,col="blue" )
text(x=quantile(exp.null[,1],0.95),y=200,"0.05",adj=c(1,0),col="red")
text(x=org.diff,y=200,"org. diff.",adj=c(1,0),col="blue")
```
FIGURE 3\.9: The null distribution for differences of means obtained via randomization. The original difference is marked via the blue line. The red line marks the value that corresponds to P\-value of 0\.05
```
p.val=sum(exp.null[,1]>org.diff)/length(exp.null[,1])
p.val
```
```
## [1] 0.001
```
After doing random permutations and getting a null distribution, it is possible to get a confidence interval for the distribution of difference in means.
This is simply the \\(2\.5th\\) and \\(97\.5th\\) percentiles of the null distribution, and
directly related to the P\-value calculation above.
### 3\.2\.2 Using t\-test for difference of the means between two samples
We can also calculate the difference between means using a t\-test. Sometimes we will have too few data points in a sample to do a meaningful
randomization test, also randomization takes more time than doing a t\-test.
This is a test that depends on the t distribution. The line of thought follows
from the CLT and we can show differences in means are t distributed.
There are a couple of variants of the t\-test for this purpose. If we assume
the population variances are equal we can use the following version
\\\[t \= \\frac{\\bar {X}\_1 \- \\bar{X}\_2}{s\_{X\_1X\_2} \\cdot \\sqrt{\\frac{1}{n\_1}\+\\frac{1}{n\_2}}}\\]
where
\\\[s\_{X\_1X\_2} \= \\sqrt{\\frac{(n\_1\-1\)s\_{X\_1}^2\+(n\_2\-1\)s\_{X\_2}^2}{n\_1\+n\_2\-2}}\\]
In the first equation above, the quantity is t distributed with \\(n\_1\+n\_2\-2\\) degrees of freedom. We can calculate the quantity and then use software
to look for the percentile of that value in that t distribution, which is our P\-value. When we cannot assume equal variances, we use “Welch’s t\-test”
which is the default t\-test in R and also works well when variances and
the sample sizes are the same. For this test we calculate the following
quantity:
\\\[t \= \\frac{\\overline{X}\_1 \- \\overline{X}\_2}{s\_{\\overline{X}\_1 \- \\overline{X}\_2}}\\]
where
\\\[s\_{\\overline{X}\_1 \- \\overline{X}\_2} \= \\sqrt{\\frac{s\_1^2 }{ n\_1} \+ \\frac{s\_2^2 }{n\_2}}\\]
and the degrees of freedom equals to
\\\[\\mathrm{d.f.} \= \\frac{(s\_1^2/n\_1 \+ s\_2^2/n\_2\)^2}{(s\_1^2/n\_1\)^2/(n\_1\-1\) \+ (s\_2^2/n\_2\)^2/(n\_2\-1\)}
\\]
Luckily, R does all those calculations for us. Below we will show the use of `t.test()` function in R. We will use it on the samples we simulated
above.
```
# Welch's t-test
stats::t.test(gene1,gene2)
```
```
##
## Welch Two Sample t-test
##
## data: gene1 and gene2
## t = 3.7653, df = 47.552, p-value = 0.0004575
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.872397 2.872761
## sample estimates:
## mean of x mean of y
## 4.057728 2.185149
```
```
# t-test with equal variance assumption
stats::t.test(gene1,gene2,var.equal=TRUE)
```
```
##
## Two Sample t-test
##
## data: gene1 and gene2
## t = 3.7653, df = 58, p-value = 0.0003905
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.8770753 2.8680832
## sample estimates:
## mean of x mean of y
## 4.057728 2.185149
```
A final word on t\-tests: they generally assume a population where samples coming
from them have a normal
distribution, however it is been shown t\-test can tolerate deviations from
normality, especially, when two distributions are moderately skewed in the
same direction. This is due to the central limit theorem, which says that the means of
samples will be distributed normally no matter the population distribution
if sample sizes are large.
### 3\.2\.3 Multiple testing correction
We should think of hypothesis testing as a non\-error\-free method of making
decisions. There will be times when we declare something significant and accept
\\(H\_1\\) but we will be wrong.
These decisions are also called “false positives” or “false discoveries”, and are also known as “type I errors”. Similarly, we can fail to reject a hypothesis
when we actually should. These cases are known as “false negatives”, also known
as “type II errors”.
The ratio of true negatives to the sum of
true negatives and false positives (\\(\\frac{TN}{FP\+TN}\\)) is known as specificity.
And we usually want to decrease the FP and get higher specificity.
The ratio of true positives to the sum of
true positives and false negatives (\\(\\frac{TP}{TP\+FN}\\)) is known as sensitivity.
And, again, we usually want to decrease the FN and get higher sensitivity.
Sensitivity is also known as the “power of a test” in the context of hypothesis
testing. More powerful tests will be highly sensitive and will have fewer type
II errors. For the t\-test, the power is positively associated with sample size
and the effect size. The larger the sample size, the smaller the standard error, and
looking for the larger effect sizes will similarly increase the power.
The general summary of these different decision combinations are
included in the table below.
| | \\(H\_0\\) is TRUE, \[Gene is NOT differentially expressed] | \\(H\_1\\) is TRUE, \[Gene is differentially expressed] | |
| --- | --- | --- | --- |
| Accept \\(H\_0\\) (claim that the gene is not differentially expressed) | True Negatives (TN) | False Negatives (FN) ,type II error | \\(m\_0\\): number of truly null hypotheses |
| reject \\(H\_0\\) (claim that the gene is differentially expressed) | False Positives (FP) ,type I error | True Positives (TP) | \\(m\-m\_0\\): number of truly alternative hypotheses |
We expect to make more type I errors as the number of tests increases, which means we will reject the null hypothesis by mistake more often. For example, if we perform a test at the 5% significance level, there is a 5% chance of incorrectly rejecting the null hypothesis when it is in fact true. However, if we perform 1000 tests where all null hypotheses are true, the average number of incorrect rejections is 50\. And if we apply the rules of probability, there is almost a 100% chance that we will have at least one incorrect rejection.
There are multiple statistical techniques to prevent this from happening. These techniques generally push the P\-values obtained from multiple tests to higher values; if the individual P\-value is low enough, it survives this process. The simplest method is to multiply the individual P\-value (\\(p\_i\\)) by the number of tests (\\(m\\)): \\(m \\cdot p\_i\\). This is called “Bonferroni correction”. However, this is too harsh if you have thousands of tests. Other methods have been developed to remedy this. Those methods rely on ranking the P\-values and dividing \\(m \\cdot p\_i\\) by the rank, \\(i\\): \\(\\frac{m \\cdot p\_i }{i}\\), which is derived from the Benjamini–Hochberg procedure. This procedure is designed to control the “False Discovery Rate (FDR)”, which is the proportion of false positives among all significant tests. In practical terms, we get the “FDR\-adjusted P\-value” from the procedure described above. This gives us an estimate of the proportion of false discoveries for a given test. To elaborate, a P\-value of 0\.05 implies that 5% of all tests will be false positives, whereas an FDR\-adjusted P\-value of 0\.05 implies that 5% of significant tests will be false positives. The FDR\-adjusted P\-values will result in a lower number of false positives.
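To make the mechanics concrete, here is a small sketch on a handful of hypothetical P\-values; the manual calculations mirror the formulas above, and `p.adjust()` gives the official implementations (the built\-in BH adjustment additionally enforces monotonicity across the ranked P\-values, so it can differ slightly from the naive ratio).
```
# a handful of hypothetical raw P-values
p <- c(0.001, 0.01, 0.03, 0.2, 0.4)
m <- length(p)
# Bonferroni: multiply each P-value by the number of tests (capped at 1)
pmin(m * p, 1)
# Benjamini-Hochberg style ratio: divide m*p_i by the rank of the P-value
(m * p) / rank(p)
# built-in implementations
p.adjust(p, method = "bonferroni")
p.adjust(p, method = "BH")
```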
One final method that is also popular is the “q\-value” method, which is related to the method above. This procedure relies on estimating the proportion of true null hypotheses from the distribution of raw P\-values and using that quantity to come up with what is called a “q\-value”, which is also an FDR\-adjusted P\-value (Storey and Tibshirani [2003](#ref-Storey2003-nv)). That can be practically defined as “the proportion of significant features that turn out to be false leads.” A q\-value of 0\.01 would mean that 1% of the tests called significant at this level will be truly null on average. Within the genomics community, q\-value and FDR\-adjusted P\-value are used synonymously, although they can be calculated differently.
In R, the base function `p.adjust()` implements most of the P\-value correction methods described above. For the q\-value, we can use the `qvalue` package from Bioconductor. Below we demonstrate how to use them on the example set of P\-values that ships with the `qvalue` package. The plot in Figure [3\.10](how-to-test-for-differences-between-samples.html#fig:multtest) shows that Bonferroni correction does a terrible job. The FDR (BH) and q\-value approaches are better, but the q\-value approach is more permissive than FDR (BH).
```
library(qvalue)
data(hedenfalk)
qvalues <- qvalue(hedenfalk$p)$q
bonf.pval=p.adjust(hedenfalk$p,method ="bonferroni")
fdr.adj.pval=p.adjust(hedenfalk$p,method ="fdr")
plot(hedenfalk$p,qvalues,pch=19,ylim=c(0,1),
xlab="raw P-values",ylab="adjusted P-values")
points(hedenfalk$p,bonf.pval,pch=19,col="red")
points(hedenfalk$p,fdr.adj.pval,pch=19,col="blue")
legend("bottomright",legend=c("q-value","FDR (BH)","Bonferroni"),
fill=c("black","blue","red"))
```
FIGURE 3\.10: Adjusted P\-values via different methods and their relationship to raw P\-values
### 3\.2\.4 Moderated t\-tests: Using information from multiple comparisons
In genomics, we usually do not do one test but many, as described above. That means we
may be able to use the information from the parameters obtained from all
comparisons to influence the individual parameters. For example, if you have many variances
calculated for thousands of genes across samples, you can force individual
variance estimates to shrink toward the mean or the median of the distribution
of variances. This usually creates better performance in individual variance
estimates and therefore better performance in significance testing, which
depends on variance estimates. How much the values are shrunk toward a common
value depends on the exact method used. These tests in general are called moderated
t\-tests or shrinkage t\-tests. One approach popularized by Limma software is
to use so\-called “Empirical Bayes methods”. The main formulation in these
methods is \\(\\hat{V\_g} \= aV\_0 \+ bV\_g\\), where \\(V\_0\\) is the background variability and \\(V\_g\\) is the individual variability. Then, these methods estimate \\(a\\) and \\(b\\) in various ways to come up with a “shrunk” version of the variability, \\(\\hat{V\_g}\\). Bayesian inference can make use of prior knowledge to make inference about properties of the data. In a Bayesian viewpoint,
the prior knowledge, in this case variability of other genes, can be used to calculate the variability of an individual gene. In our
case, \\(V\_0\\) would be the prior knowledge we have on the variability of
the genes and we
use that knowledge to influence our estimate for the individual genes.
Below we are simulating a gene expression matrix with 500 genes, and 3 test and 3 control samples. Each row is a gene, and in normal circumstances we would like to find differentially expressed genes. In this case, we are simulating all values from the same distribution, so in reality we do not expect any differences between the groups.
We then adjust the standard error estimates in an empirical Bayesian spirit, but in a very crude way. We just shrink the gene\-wise standard error estimates towards the median with equal \\(a\\) and \\(b\\) weights. That is to say, we add the individual estimate to the median of the standard error distribution from all genes and divide that quantity by 2\. So if we plug that into the above formula, what we do is:
\\\[ \\hat{V\_g} \= (V\_0 \+ V\_g)/2 \\]
In the code below, we are avoiding for loops or apply family functions
by using vectorized operations. The code below samples gene expression values from a hypothetical distribution. Since all the values come from the same distribution, we do not expect differences between groups. We then calculate moderated and unmoderated t\-test statistics and plot the P\-value distributions for tests. The results are shown in Figure [3\.11](how-to-test-for-differences-between-samples.html#fig:modTtestChp3).
```
set.seed(100)
#sample data matrix from normal distribution
gset=rnorm(3000,mean=200,sd=70)
data=matrix(gset,ncol=6)
# set groups
group1=1:3
group2=4:6
n1=3
n2=3
dx=rowMeans(data[,group1])-rowMeans(data[,group2])
require(matrixStats)
# get the standard error estimate using the pooled variance
stderr = sqrt( (rowVars(data[,group1])*(n1-1) +
rowVars(data[,group2])*(n2-1)) / (n1+n2-2) * ( 1/n1 + 1/n2 ))
# do the shrinking towards median
mod.stderr = (stderr + median(stderr)) / 2 # moderation in variation
# estimate t statistic with the moderated standard error
t.mod <- dx / mod.stderr
# calculate P-value of rejecting null
p.mod = 2*pt( -abs(t.mod), n1+n2-2 )
# estimate t statistic without moderation
t = dx / stderr
# calculate P-value of rejecting null
p = 2*pt( -abs(t), n1+n2-2 )
par(mfrow=c(1,2))
hist(p,col="cornflowerblue",border="white",main="",xlab="P-values t-test")
mtext(paste("signifcant tests:",sum(p<0.05)) )
hist(p.mod,col="cornflowerblue",border="white",main="",
xlab="P-values mod. t-test")
mtext(paste("signifcant tests:",sum(p.mod<0.05)) )
```
FIGURE 3\.11: The distributions of P\-values obtained by t\-tests and moderated t\-tests
**Want to know more ?**
* Basic statistical concepts
+ “Cartoon guide to statistics” by Gonick \& Smith (Gonick and Smith [2005](#ref-gonick2005cartoon)). Provides central concepts depicted as cartoons in a funny but clear and accurate manner.
+ “OpenIntro Statistics” (Diez, Barr, Çetinkaya\-Rundel, et al. [2015](#ref-diez2015openintro)) (Free e\-book <http://openintro.org>). This book provides fundamental statistical concepts in a clear and easy way. It includes R code.
* Hands\-on statistics recipes with R
+ “The R book” (Crawley [2012](#ref-crawley2012r)). This is the main R book for anyone interested in statistical concepts and their application in R. It requires some background in statistics since the main focus is applications in R.
* Moderated tests
+ Comparison of moderated tests for differential expression (De Hertogh, De Meulder, Berger, et al. [2010](#ref-de2010benchmark)) [http://bmcbioinformatics.biomedcentral.com/articles/10\.1186/1471\-2105\-11\-17](http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-17)
+ Limma method developed for testing differential expression between genes using a moderated test (Smyth Gordon [2004](#ref-smyth2004linear)) <http://www.statsci.org/smyth/pubs/ebayes.pdf>
3\.3 Relationship between variables: Linear models and correlation
------------------------------------------------------------------
In genomics, we often need to measure or model the relationship between variables. We might want to know about the expression of a particular gene in the liver in relation to the dosage of a drug that a patient receives. Or, we may want to know the DNA methylation of a certain locus in the genome in relation to the age of the sample donor. Or, we might be interested in the relationship between histone modifications and gene expression: is there a linear relationship, where more histone modification means the gene is expressed more?
In these situations and many more, linear regression or linear models can be used to model the relationship between a “dependent” or “response” variable (expression or methylation in the above examples) and one or more “independent” or “explanatory” variables (age, drug dosage or histone modification in the above examples). Our simple linear model has the following components.
\\\[ Y\= \\beta\_0\+\\beta\_1X \+ \\epsilon \\]
In the equation above, \\(Y\\) is the response variable and \\(X\\) is the explanatory
variable. \\(\\epsilon\\) is the mean\-zero error term. Since the line fit will not
be able to precisely predict the \\(Y\\) values, there will be some error associated
with each prediction when we compare it to the original \\(Y\\) values. This error
is captured in the \\(\\epsilon\\) term. We can alternatively write the model as
follows to emphasize that the model approximates \\(Y\\), in this case notice that we removed the \\(\\epsilon\\) term: \\(Y \\sim \\beta\_0\+\\beta\_1X\\).
The plot below in Figure [3\.12](relationship-between-variables-linear-models-and-correlation.html#fig:histoneLmChp3) shows the relationship between
histone modification (trimethylated forms of histone H3 at lysine 4, aka H3K4me3\)
and gene expression for 100 genes. The blue line is our model with estimated
coefficients (\\(\\hat{y}\=\\hat{\\beta}\_0 \+ \\hat{\\beta}\_1X\\), where \\(\\hat{\\beta}\_0\\)
and \\(\\hat{\\beta}\_1\\) are the estimated values of \\(\\beta\_0\\) and
\\(\\beta\_1\\), and \\(\\hat{y}\\) indicates the prediction). The red lines indicate the individual
errors per data point, indicated as \\(\\epsilon\\) in the formula above.
FIGURE 3\.12: Relationship between histone modification score and gene expression. Increasing histone modification, H3K4me3, seems to be associated with increasing gene expression. Each dot is a gene
There could be more than one explanatory variable. We then simply add more \\(X\\)
and \\(\\beta\\) to our model. If there are two explanatory variables our model
will look like this:
\\\[ Y\= \\beta\_0\+\\beta\_1X\_1 \+\\beta\_2X\_2 \+ \\epsilon \\]
In this case, we will be fitting a plane rather than a line. However, the fitting
process which we will describe in the later sections will not change for our
gene expression problem. We can introduce one more histone modification, H3K27me3\. We will then have a linear model with 2 explanatory variables and the
fitted plane will look like the one in Figure [3\.13](relationship-between-variables-linear-models-and-correlation.html#fig:histoneLm2chp3). The gene expression values are shown
as dots below and above the fitted plane. Linear regression and its extensions which make use of other distributions (generalized linear models) are central in computational genomics for statistical tests. We will see more of how regression is used in statistical hypothesis testing for computational genomics in Chapters [8](rnaseqanalysis.html#rnaseqanalysis) and [10](bsseq.html#bsseq).
FIGURE 3\.13: Association of gene expression with H3K4me3 and H3K27me3 histone modifications.
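As a minimal sketch of what fitting such a two\-variable model looks like in R, here is a toy example on made\-up data (not the histone modification data shown in the figures); the coefficient values below are arbitrary choices for illustration.
```
# a toy model with two explanatory variables; values are made up
set.seed(3)
x1 <- rnorm(100)
x2 <- rnorm(100)
y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(100)
fit2 <- lm(y ~ x1 + x2)  # fits a plane rather than a line
coef(fit2)
```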
#### 3\.3\.0\.1 Matrix notation for linear models
We can naturally have more explanatory variables than just two. The formula
below has \\(n\\) explanatory variables.
\\\[Y\= \\beta\_0\+\\beta\_1X\_1\+\\beta\_2X\_2 \+ \\beta\_3X\_3 \+ .. \+ \\beta\_nX\_n \+\\epsilon\\]
If there are many variables, it would be easier
to write the model in matrix notation. The matrix form of linear model with
two explanatory variables will look like the one
below. The first matrix would be our data matrix. This contains our explanatory
variables and a column of 1s. The second term is a column vector of \\(\\beta\\)
values. We also add a vector of error terms, \\(\\epsilon\\)s, to the matrix multiplication.
\\\[
\\mathbf{Y} \= \\left\[\\begin{array}{rrr}
1 \& X\_{1,1} \& X\_{1,2} \\\\
1 \& X\_{2,1} \& X\_{2,2} \\\\
1 \& X\_{3,1} \& X\_{3,2} \\\\
1 \& X\_{4,1} \& X\_{4,2}
\\end{array}\\right]
%
\\left\[\\begin{array}{rrr}
\\beta\_0 \\\\
\\beta\_1 \\\\
\\beta\_2
\\end{array}\\right]
%
\+
\\left\[\\begin{array}{rrr}
\\epsilon\_1 \\\\
\\epsilon\_2 \\\\
\\epsilon\_3 \\\\
\\epsilon\_4
\\end{array}\\right]
\\]
The multiplication of the data matrix and \\(\\beta\\) vector and addition of the
error terms simply results in the following set of equations per data point:
\\\[
\\begin{aligned}
Y\_1\= \\beta\_0\+\\beta\_1X\_{1,1}\+\\beta\_2X\_{1,2} \+\\epsilon\_1 \\\\
Y\_2\= \\beta\_0\+\\beta\_1X\_{2,1}\+\\beta\_2X\_{2,2} \+\\epsilon\_2 \\\\
Y\_3\= \\beta\_0\+\\beta\_1X\_{3,1}\+\\beta\_2X\_{3,2} \+\\epsilon\_3 \\\\
Y\_4\= \\beta\_0\+\\beta\_1X\_{4,1}\+\\beta\_2X\_{4,2} \+\\epsilon\_4
\\end{aligned}
\\]
This expression involving the multiplication of the data matrix, the
\\(\\beta\\) vector and vector of error terms (\\(\\epsilon\\))
could be simply written as follows.
\\\[Y\=X\\beta \+ \\epsilon\\]
In the equation above, \\(Y\\) is the vector of response variables, \\(X\\) is the
data matrix, and \\(\\beta\\) is the vector of coefficients.
This notation is more concise and often used in scientific papers. However, this
also means you need some understanding of linear algebra to follow the math
laid out in such resources.
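A minimal numeric sketch of this matrix form, with made\-up numbers, is below; the `%*%` operator performs the matrix multiplication.
```
# a toy illustration of Y = X*beta + epsilon with two explanatory variables
set.seed(10)
X <- cbind(1, rnorm(4), rnorm(4))    # data matrix with a column of 1s
beta <- c(2, 0.5, -1)                # hypothetical coefficients
eps <- rnorm(4, mean = 0, sd = 0.1)  # error terms
Y <- X %*% beta + eps
Y
```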
### 3\.3\.1 How to fit a line
At this point a major question is left unanswered: How did we fit this line?
We basically need to define \\(\\beta\\) values in a structured way.
There are multiple ways of understanding how
to do this, all of which converge to the same
end point. We will describe them one by one.
#### 3\.3\.1\.1 The cost or loss function approach
This is the first approach and, in my opinion, the easiest to understand.
We try to optimize a function, often called the “cost function” or “loss function”.
The cost function
is the sum of squared differences between the predicted \\(\\hat{Y}\\) values from our model
and the original \\(Y\\) values. The optimization procedure tries to find \\(\\beta\\) values
that minimize this difference between the reality and predicted values.
\\\[min \\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\]
Note that this is related to the error term, \\(\\epsilon\\), we already mentioned
above. We are trying to minimize the squared sum of \\(\\epsilon\_i\\) for each data
point. We can do this minimization by a bit of calculus.
The rough algorithm is as follows:
1. Pick a random starting point, random \\(\\beta\\) values.
2. Take the partial derivatives of the cost function to see which direction is
the way to go in the cost function.
3. Take a step toward the direction that minimizes the cost function.
* Step size is a parameter to choose; there are many variants.
4. Repeat steps 2 and 3 until convergence.
This is the basis of the “gradient descent” algorithm. With the help of partial
derivatives we define a “gradient” on the cost function and follow that through
multiple iterations until convergence, meaning until the results do not improve by more than a defined margin. The algorithm usually converges to optimum \\(\\beta\\)
values. In Figure [3\.14](relationship-between-variables-linear-models-and-correlation.html#fig:3dcostfunc), we show the cost function over various \\(\\beta\_0\\) and \\(\\beta\_1\\)
values for the histone modification and gene expression data set. The algorithm
will pick a point on this graph and traverse it incrementally based on the
derivatives and converge to the bottom of the cost function “well”. Such optimization methods are the core of machine learning methods we will cover later in Chapters [4](unsupervisedLearning.html#unsupervisedLearning) and
[5](supervisedLearning.html#supervisedLearning).
FIGURE 3\.14: Cost function landscape for linear regression with changing beta values. The optimization process tries to find the lowest point in this landscape by implementing a strategy for updating beta values toward the lowest point in the landscape.
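A toy implementation of this idea is sketched below on simulated data. The step size and number of iterations are ad hoc choices for this particular example; a step size that is too large will make the procedure diverge.
```
# a toy gradient descent for simple linear regression (illustrative only)
set.seed(1)
x <- runif(50)                      # explanatory variable on a small scale
y <- 1 + 3 * x + rnorm(50, 0, 0.5)  # response with known coefficients
b0 <- 0; b1 <- 0                    # starting point
step <- 0.01                        # step size
for (i in 1:5000) {
  err <- y - (b0 + b1 * x)
  # partial derivatives of the cost function sum(err^2)
  g0 <- -2 * sum(err)
  g1 <- -2 * sum(err * x)
  b0 <- b0 - step * g0
  b1 <- b1 - step * g1
}
c(b0, b1)        # should be close to the least-squares estimates
coef(lm(y ~ x))
```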
#### 3\.3\.1\.2 Not cost function but maximum likelihood function
We can also think of this problem from a more statistical point of view. In
essence, we are looking for the best statistical parameters, in this
case \\(\\beta\\) values, for our model that are most likely to produce such a
scatter of data points given the explanatory variables. This is called the
“maximum likelihood” approach. The approach assumes that a given response variable \\(y\_i\\) follows a normal distribution with mean \\(\\beta\_0\+\\beta\_1x\_i\\) and variance \\(s^2\\). Therefore the probability of observing any given \\(y\_i\\) value is dependent on the \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Since \\(x\_i\\), the explanatory variable, is fixed within our data set, we can maximize the probability of observing any given \\(y\_i\\) by varying \\(\\beta\_0\\) and \\(\\beta\_1\\) values. The trick is to find \\(\\beta\_0\\) and \\(\\beta\_1\\) values that maximize the probability of observing all the response variables in the dataset given the explanatory variables. The probability of observing a response variable \\(y\_i\\) with the assumptions we described above is shown below. Note that this assumes the variance is constant and \\(s^2\=\\frac{\\sum{\\epsilon\_i^2}}{n\-2}\\) is an unbiased estimator of the population variance, \\(\\sigma^2\\).
\\\[P(y\_{i})\=\\frac{1}{s\\sqrt{2\\pi} }e^{\-\\frac{1}{2}\\left(\\frac{y\_i\-(\\beta\_0 \+ \\beta\_1x\_i)}{s}\\right)^2}\\]
Following from the probability equation above, the likelihood function (shown as \\(L\\) below) for
linear regression is multiplication of \\(P(y\_{i})\\) for all data points.
\\\[L\=P(y\_1\)P(y\_2\)P(y\_3\)..P(y\_n)\=\\prod\\limits\_{i\=1}^n{P\_i}\\]
This can be simplified to the following equation by some algebra, assumption of normal distribution, and taking logs (since it is
easier to add than multiply).
\\\[ln(L) \= \-nln(s\\sqrt{2\\pi}) \- \\frac{1}{2s^2} \\sum\\limits\_{i\=1}^n{(y\_i\-(\\beta\_0 \+ \\beta\_1x\_i))^2} \\]
As you can see, the right part of the function is the negative of the cost function
defined above. If we wanted to optimize this function we would need to take the derivative of
the function with respect to the \\(\\beta\\) parameters. That means we can ignore the
first part since there are no \\(\\beta\\) terms there. This simply reduces to the
negative of the cost function. Hence, this approach produces exactly the same
result as the cost function approach. The difference is that we defined our
problem
within the domain of statistics. This particular function still has to be optimized. This can be done with some calculus without the need for an
iterative approach.
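Because maximizing this log\-likelihood is equivalent to minimizing the cost function, a quick sketch is to hand the cost function to a general\-purpose optimizer such as `optim()` and compare the result with `lm()`; the toy data below are chosen only for illustration.
```
# minimizing the cost function numerically and comparing with lm()
set.seed(1)
x <- runif(50)
y <- 1 + 3 * x + rnorm(50, 0, 0.5)
cost <- function(beta) sum((y - (beta[1] + beta[2] * x))^2)
optim(c(0, 0), cost)$par  # numerical minimum of the cost function
coef(lm(y ~ x))           # least-squares fit; the two agree closely
```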
The maximum likelihood approach also opens up other possibilities for regression. For the case above, we assumed that the points around the mean are distributed by normal distribution. However, there are other cases where this assumption may not hold. For example, for the count data the mean and variance relationship is not constant; the higher the mean counts, the higher the variance. In these cases, the regression framework with maximum likelihood estimation can still be used. We simply change the underlying assumptions about the distribution and calculate the likelihood with a new distribution in mind,
and maximize the parameters for that likelihood. This gives way to “generalized linear model” approach where errors for the response variables can have other distributions than normal distribution. We will see examples of these generalized linear models in Chapter [8](rnaseqanalysis.html#rnaseqanalysis) and [10](bsseq.html#bsseq).
#### 3\.3\.1\.3 Linear algebra and closed\-form solution to linear regression
The last approach we will describe is the minimization process using linear
algebra. If you find this concept challenging, feel free to skip it, but scientific publications and other books frequently use matrix notation and linear algebra to define and solve regression problems. In this case, we do not use an iterative approach. Instead, we will
minimize the cost function by explicitly taking its derivatives with respect to
\\(\\beta\\)’s and setting them to zero. This is doable by employing linear algebra
and matrix calculus. This approach is also called “ordinary least squares”. We
will not
show the whole derivation here, but the following expression
is what we are trying to minimize in matrix notation, which is basically a
different notation of the same minimization problem defined above. Remember
\\(\\epsilon\_i\=Y\_i\-(\\beta\_0\+\\beta\_1x\_i)\\)
\\\[
\\begin{aligned}
\\sum\\epsilon\_{i}^2\=\\epsilon^T\\epsilon\=(Y\-X{\\beta})^T(Y\-X{\\beta}) \\\\
\=Y^T{Y}\-2{\\beta}^TX^T{Y}\+{\\beta}^TX^TX{\\beta}
\\end{aligned}
\\]
After rearranging the terms, we take the derivative of \\(\\epsilon^T\\epsilon\\) with respect to \\(\\beta\\) and set it equal to zero. We then arrive at the following for the estimated \\(\\beta\\) values, \\(\\hat{\\beta}\\):
\\\[\\hat{\\beta}\=(X^TX)^{\-1}X^TY\\]
This requires you to calculate the inverse of the \\(X^TX\\) term, which could
be slow for large matrices. Using an iterative approach over the cost function
derivatives will be faster for larger problems.
The linear algebra notation is something you will often see in papers and other resources. If you plug the data matrix \\(X\\) and the response vector \\(Y\\) into \\(\\hat{\\beta}\=(X^TX)^{\-1}X^TY\\), you get the estimated values for \\(\\beta\_0\\) and \\(\\beta\_1\\) in simple regression. However, we should note that this simple linear regression case can easily be solved algebraically without the need for matrix operations. This can be done by taking the derivatives of \\(\\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\) with respect to \\(\\beta\_0\\) and \\(\\beta\_1\\), rearranging the terms, and setting the derivatives to zero, which yields:
\\\[\\hat{\\beta\_1}\=\\frac{\\sum{(x\_i\-\\overline{X})(y\_i\-\\overline{Y})}}{ \\sum{(x\_i\-\\overline{X})^2} }\\]
\\\[\\hat{\\beta\_0}\=\\overline{Y}\-\\hat{\\beta\_1}\\overline{X}\\]
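A minimal sketch of the closed\-form solution on toy data, comparing the matrix formula with `lm()`, is below.
```
# ordinary least squares via the closed-form matrix solution
set.seed(1)
x <- runif(50)
y <- 1 + 3 * x + rnorm(50, 0, 0.5)
X <- cbind(1, x)                   # data matrix with a column of 1s
solve(t(X) %*% X) %*% t(X) %*% y   # beta-hat = (X^T X)^-1 X^T y
coef(lm(y ~ x))                    # same estimates
```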
#### 3\.3\.1\.4 Fitting lines in R
After all this theory, you will be surprised how easy it is to fit lines in R.
This is achieved just by the `lm()` function, which stands for linear models. Let’s do this
for a simulated data set and plot the fit. The first step is to simulate the
data. We will decide on \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Then we will decide
on the variance parameter, \\(\\sigma\\), to be used in simulation of error terms,
\\(\\epsilon\\). We will first find \\(Y\\) values, just using the linear equation
\\(Y\=\\beta\_0\+\\beta\_1X\\), for
a set of \\(X\\) values. Then, we will add the error terms to get our simulated values.
```
# set random number seed, so that the random numbers from the text
# is the same when you run the code.
set.seed(32)
# get 50 X values between 1 and 100
x = runif(50,1,100)
# set b0,b1 and variance (sigma)
b0 = 10
b1 = 2
sigma = 20
# simulate error terms from normal distribution
eps = rnorm(50,0,sigma)
# get y values from the linear equation and addition of error terms
y = b0 + b1*x+ eps
```
Now let us fit a line using the `lm()` function. The function requires a formula, and
optionally a data frame. We need to pass the following expression within the
`lm()` function, `y~x`, where `y` is the simulated \\(Y\\) values and `x` is the explanatory variable \\(X\\). We will then use the `abline()` function to draw the fit. The resulting plot is shown in Figure [3\.15](relationship-between-variables-linear-models-and-correlation.html#fig:geneExpLinearModel).
```
mod1=lm(y~x)
# plot the data points
plot(x,y,pch=20,
ylab="Gene Expression",xlab="Histone modification score")
# plot the linear fit
abline(mod1,col="blue")
```
FIGURE 3\.15: Gene expression and histone modification score modeled by linear regression.
### 3\.3\.2 How to estimate the error of the coefficients
Since we are using a sample to estimate the coefficients, they are
not exact; with every random sample they will vary. In Figure [3\.16](relationship-between-variables-linear-models-and-correlation.html#fig:regCoeffRandomSamples), we
take multiple samples from the population and fit lines to each
sample; with each sample the lines slightly change. We are overlaying the
points and the lines for each sample on top of the other samples. When we take 200 samples and fit lines to each of them, the line fits vary, and we get a normal\-like distribution of \\(\\beta\\) values with a defined mean and standard deviation; this standard deviation is called the standard error of the coefficients.
FIGURE 3\.16: Regression coefficients vary with every random sample. The figure illustrates the variability of regression coefficients when regression is done using a sample of data points. Histograms depict this variability for \\(b\_0\\) and \\(b\_1\\) coefficients.
Normally, we will not have access to the population to do repeated sampling,
model fitting, and estimation of the standard error for the coefficients. But
there is statistical theory that helps us infer the population properties from
the sample. When we assume that error terms have constant variance and mean zero, we can model the uncertainty in the regression coefficients, \\(\\beta\\)s.
The estimates for standard errors of \\(\\beta\\)s for simple regression are as
follows and shown without derivation.
\\\[
\\begin{aligned}
s\=RSE\=\\sqrt{\\frac{\\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}}{n\-2} } \=\\sqrt{\\frac{\\sum{\\epsilon^2}}{n\-2} } \\\\
SE(\\hat{\\beta\_1})\=\\frac{s}{\\sqrt{\\sum{(x\_i\-\\overline{X})^2}}} \\\\
SE(\\hat{\\beta\_0})\=s\\sqrt{ \\frac{1}{n} \+ \\frac{\\overline{X}^2}{\\sum{(x\_i\-\\overline{X})^2} } }
\\end{aligned}
\\]
Notice that \\(SE(\\beta\_1\)\\) depends on the estimate of the variance of the residuals, shown as \\(s\\) or the **Residual Standard Error (RSE)**. Notice also that the standard error depends on the spread of \\(X\\). If \\(X\\) values have more
variation, the standard error will be lower. This intuitively makes sense since if the
spread of \\(X\\) is low, the regression line will be able to wiggle more
compared to a regression line that is fit to the same number of points but
covers a greater range on the X\-axis.
The standard error estimates can also be used to calculate confidence intervals and test
hypotheses, since the following quantity, called t\-score, approximately follows a
t\-distribution with \\(n\-p\\) degrees of freedom, where \\(n\\) is the number
of data points and \\(p\\) is the number of coefficients estimated.
\\\[ \\frac{\\hat{\\beta\_i}\-\\beta\_{test}}{SE(\\hat{\\beta\_i})}\\]
Often, we would like to test the null hypothesis that a coefficient is equal to zero. For simple regression, this amounts to testing whether there is a relationship
between the explanatory variable and the response variable. We would calculate the
t\-score as follows \\(\\frac{\\hat{\\beta\_i}\-0}{SE(\\hat{\\beta\_i})}\\), and compare it
to the t\-distribution with \\(d.f.\=n\-p\\) to get the p\-value.
We can also
calculate the uncertainty of the regression coefficients using confidence
intervals, the range of values that are likely to contain \\(\\beta\_i\\). The 95%
confidence interval for \\(\\hat{\\beta\_i}\\) is
\\(\\hat{\\beta\_i}\\) ± \\(t\_{0\.975}SE(\\hat{\\beta\_i})\\).
\\(t\_{0\.975}\\) is the 97\.5th percentile of the t\-distribution with \\(d.f. \= n \- p\\).
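As a quick check of this formula, assuming the `mod1` fit from above is still in the workspace, the 95% confidence interval for the slope can be computed by hand with `qt()`:
```
# manual 95% confidence interval for the slope of mod1
est <- summary(mod1)$coefficients   # estimates, standard errors, t, P
b1.hat <- est["x", "Estimate"]
se.b1  <- est["x", "Std. Error"]
df.res <- mod1$df.residual          # n - p
b1.hat + c(-1, 1) * qt(0.975, df.res) * se.b1
```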
In R, the `summary()` function will test all the coefficients for the null hypothesis
\\(\\beta\_i\=0\\). The function takes the model output obtained from the `lm()`
function. To demonstrate this, let us first get some data. The data come from a simulation set up for a regression setting, and it is useful to examine such a simulation to see what kind of data the linear model expects.
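The simulation chunk itself is not echoed here; a sketch of what such a procedure might look like is below. The specific parameter values are assumptions for illustration, so this sketch will not reproduce the exact numbers in the output that follows.
```
# a hypothetical simulation in the spirit of the data used below:
# 100 points, a linear relationship, and normally distributed errors
# (the actual parameters are not shown in the text; these are assumptions)
set.seed(32)
x <- runif(100, 1, 100)
y <- 10 + 0.5 * x + rnorm(100, 0, 30)
```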
Since we have the data, we can build our model and call the `summary` function.
We will then use the `confint()` function to get the confidence intervals on the
coefficients and the `coef()` function to pull out the estimated coefficients from
the model.
```
mod1=lm(y~x)
summary(mod1)
```
```
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -77.11 -18.44 0.33 16.06 57.23
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.24538 6.28869 2.106 0.0377 *
## x 0.49954 0.05131 9.736 4.54e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 28.77 on 98 degrees of freedom
## Multiple R-squared: 0.4917, Adjusted R-squared: 0.4865
## F-statistic: 94.78 on 1 and 98 DF, p-value: 4.537e-16
```
```
# get confidence intervals
confint(mod1)
```
```
## 2.5 % 97.5 %
## (Intercept) 0.7656777 25.7250883
## x 0.3977129 0.6013594
```
```
# pull out coefficients from the model
coef(mod1)
```
```
## (Intercept) x
## 13.2453830 0.4995361
```
The `summary()` function prints out an extensive list of values.
The “Coefficients” section has the estimates, their standard error, t score,
and the p\-value from the hypothesis test \\(H\_0:\\beta\_i\=0\\). As you can see, the
estimates we get for the coefficients and their standard errors are close to
the ones we get from repeatedly sampling and getting a distribution of
coefficients. This is statistical inference at work, so we can estimate the
population properties within a certain error using just a sample.
### 3\.3\.3 Accuracy of the model
If you have observed the table output of the `summary()` function, you must have noticed there are some other outputs, such as “Residual standard error”,
“Multiple R\-squared” and “F\-statistic”. These are metrics that are useful
for assessing the accuracy of the model. We will explain them one by one.
**RSE** is simply the square root of the sum of squared error terms divided by the degrees of freedom, \\(n\-p\\). For the simple linear regression case, the degrees of freedom is \\(n\-2\\). The sum of the squares of the error terms is also called the **“Residual sum of squares”**, RSS. So the RSE is calculated as follows:
\\\[ s\=RSE\=\\sqrt{\\frac{\\sum{(y\_i\-\\hat{Y\_i})^2 }}{n\-p}}\=\\sqrt{\\frac{RSS}{n\-p}}\\]
The RSE is a way of assessing the model fit. The larger the RSE, the worse the model is. However, this is an absolute measure in the units of \\(Y\\) and we have nothing to compare it against. One idea is to divide the RSS of our model by the RSS of a simpler model for comparative purposes. That simpler model is, in this case, the model with only the intercept, \\(\\beta\_0\\). A very bad model will have close to zero coefficients for the explanatory variables, and the RSS of that model will be close to the RSS of the model with only the intercept. In such a model the intercept will be equal to \\(\\overline{Y}\\). As it turns out, the RSS of the model with just the intercept is called the *“Total Sum of Squares” or TSS*. A good model will have a low \\(RSS/TSS\\) ratio. The metric \\(R^2\\) uses these quantities to calculate a score between 0 and 1; the closer to 1, the better the model. Here is how it is calculated:
\\\[R^2\=1\-\\frac{RSS}{TSS}\=\\frac{TSS\-RSS}{TSS}\\]
The \\(TSS\-RSS\\) part of the formula is often referred to as the “explained variability” in the model, and the denominator, TSS, represents the “total variability”. With this interpretation, the higher the “explained variability”, the better the model. For simple linear regression with one explanatory variable, the square root of \\(R^2\\) equals the absolute value of the correlation coefficient, a quantity that can be calculated for any pair of variables, not only the response and the explanatory variables. *Correlation* is the general measure of the linear relationship between two variables. One of the most popular flavors of correlation is the Pearson correlation coefficient. Formally, it is the *covariance* of X and Y divided by the product of the standard deviations of X and Y. In R, it can be calculated with the `cor()` function.
\\\[
r\_{xy}\=\\frac{cov(X,Y)}{\\sigma\_x\\sigma\_y}
\=\\frac{\\sum\\limits\_{i\=1}^n (x\_i\-\\bar{x})(y\_i\-\\bar{y})}
{\\sqrt{\\sum\\limits\_{i\=1}^n (x\_i\-\\bar{x})^2 \\sum\\limits\_{i\=1}^n (y\_i\-\\bar{y})^2}}
\\]
In the equation above, \\(cov\\) is the covariance; this is again a measure of how much two variables change together, like correlation. If two variables show similar behavior, they will usually have a positive covariance value. If they have opposite behavior, the covariance will have a negative value. However, these values are boundless. A normalized way of looking at covariance is to divide the covariance by the product of the standard deviations of X and Y. This bounds the values between \-1 and 1 and, as mentioned above, is called the Pearson correlation coefficient. Variables that change in a similar manner will have a positive coefficient, variables that change in an opposite manner will have a negative coefficient, and pairs that do not have a linear relationship will have \\(0\\) or near\-zero correlation. In Figure [3\.17](relationship-between-variables-linear-models-and-correlation.html#fig:CorCovar), we show \\(R^2\\), the correlation coefficient, and the covariance for different scatter plots.
FIGURE 3\.17: Correlation and covariance for different scatter plots.
For simple linear regression, correlation can be used to assess the model. However, this becomes useless as a measure of general accuracy
if there is more than one explanatory variable as in multiple linear regression. In that case, \\(R^2\\) is a measure
of accuracy for the model. Interestingly, the square of the
correlation of predicted values
and original response variables (\\((cor(Y,\\hat{Y}))^2\\) ) equals \\(R^2\\) for
multiple linear regression.
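A quick sketch of these relationships, assuming the `x`, `y`, and `mod1` objects from the fit above are still available:
```
# covariance, correlation, and their link to R-squared for mod1
cov(x, y)
cor(x, y)
cor(x, y)^2             # equals R-squared for simple regression
cor(y, fitted(mod1))^2  # also equals R-squared
summary(mod1)$r.squared
```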
The last measure of model accuracy we are going to explain is the *F\-statistic*. This is a quantity that again depends on the RSS and TSS. It can also answer one important question that the other metrics cannot easily answer: whether any of the explanatory variables have predictive value, or in other words, whether all the coefficients of the explanatory variables are zero. We can write the null hypothesis as follows:
\\\[H\_0: \\beta\_1\=\\beta\_2\=\\beta\_3\=...\=\\beta\_p\=0 \\]
where the alternative is:
\\\[H\_1: \\text{at least one } \\beta\_i \\neq 0 \\]
Remember that \\(TSS\-RSS\\) is analogous to the “explained variability” and the RSS is analogous to the “unexplained variability”. For the F\-statistic, we divide the explained variance by the unexplained variance. The explained variance is just \\(TSS\-RSS\\) divided by its degrees of freedom, and the unexplained variance is the RSS divided by its degrees of freedom, which equals \\(RSE^2\\). The ratio will follow the F\-distribution with two parameters, the degrees of freedom for the explained variance and the degrees of freedom for the unexplained variance. The F\-statistic for a linear model is calculated as follows.
\\\[F\=\\frac{(TSS\-RSS)/(p\-1\)}{RSS/(n\-p)}\=\\frac{(TSS\-RSS)/(p\-1\)}{RSE^2} \\sim F(p\-1,n\-p)\\]
If the two variance estimates are the same, the ratio will be 1\. When \\(H\_0\\) is true, it can be shown that the expected value of \\((TSS\-RSS)/(p\-1\)\\) will be \\(\\sigma^2\\), which is estimated by \\(RSE^2\\). So, if the explained variance is significantly larger than the unexplained variance, the ratio will be significantly bigger than 1\.
If the ratio is large enough, we can reject the null hypothesis. To assess that, we need to use software or look up tables of the F\-distribution with the calculated parameters. In R, the `qf()` function can be used to calculate the critical value of the ratio, and `pf()` gives the corresponding P\-value. The benefit of the F\-test over looking at the significance of the coefficients one by one is that we circumvent the multiple testing problem. If there are many explanatory variables and none of them is truly associated with the response, about 5% of the coefficient t\-tests (assuming we use 0\.05 as the P\-value significance cutoff) will be falsely significant just by chance. In summary, the F\-test is a better choice for testing whether there is any association between the explanatory variables and the response variable.
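As a sketch, the critical value and the P\-value for the F\-statistic reported by `summary(mod1)` can be obtained with `qf()` and `pf()` as follows, assuming the `mod1` fit from above.
```
# critical value and P-value for the F-statistic of mod1
fstat <- summary(mod1)$fstatistic            # value, numdf, dendf
fstat
qf(0.95, df1 = fstat["numdf"], df2 = fstat["dendf"])  # critical value at 5%
pf(fstat["value"], fstat["numdf"], fstat["dendf"],
   lower.tail = FALSE)                       # P-value of the F-test
```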
### 3\.3\.4 Regression with categorical variables
An important feature of linear regression is that categorical variables can be used as explanatory variables. This feature is very useful in genomics, where explanatory variables are often categorical. To put it in context, in our histone modification example we can also include whether promoters have CpG islands or not as a variable. In addition, in differential gene expression analysis, we usually test the difference between conditions, which can be encoded as categorical variables in a linear regression. We could certainly use the t\-test for that as well if there are only 2 conditions, but if there are more conditions and other variables to control for, such as the age or sex of the samples, we need to take those into account in our statistics, and the t\-test alone cannot handle such complexity. In addition, when we have categorical variables we can also have numeric variables in the same model; we certainly do not have to include only one type of variable in a model.
The simplest model with categorical variables includes two levels that can be encoded as 0 and 1\. Below, we show linear regression with such a two\-level categorical variable. We then plot the fitted line. This plot is shown in Figure [3\.18](relationship-between-variables-linear-models-and-correlation.html#fig:LMcategorical).
```
set.seed(100)
gene1=rnorm(30,mean=4,sd=2)
gene2=rnorm(30,mean=2,sd=2)
gene.df=data.frame(exp=c(gene1,gene2),
group=c( rep(1,30),rep(0,30) ) )
mod2=lm(exp~group,data=gene.df)
summary(mod2)
```
```
##
## Call:
## lm(formula = exp ~ group, data = gene.df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.7290 -1.0664 0.0122 1.3840 4.5629
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.1851 0.3517 6.214 6.04e-08 ***
## group 1.8726 0.4973 3.765 0.000391 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.926 on 58 degrees of freedom
## Multiple R-squared: 0.1964, Adjusted R-squared: 0.1826
## F-statistic: 14.18 on 1 and 58 DF, p-value: 0.0003905
```
```
require(mosaic)
plotModel(mod2)
```
FIGURE 3\.18: Linear model with a categorical variable coded as 0 and 1\.
We can even compare more levels, and we do not even have to encode them
ourselves. We can pass categorical variables to the `lm()` function.
```
gene.df=data.frame(exp=c(gene1,gene2,gene2),
group=c( rep("A",30),rep("B",30),rep("C",30) )
)
mod3=lm(exp~group,data=gene.df)
summary(mod3)
```
```
##
## Call:
## lm(formula = exp ~ group, data = gene.df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.7290 -1.0793 -0.0976 1.4844 4.5629
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.0577 0.3781 10.731 < 2e-16 ***
## groupB -1.8726 0.5348 -3.502 0.000732 ***
## groupC -1.8726 0.5348 -3.502 0.000732 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.071 on 87 degrees of freedom
## Multiple R-squared: 0.1582, Adjusted R-squared: 0.1388
## F-statistic: 8.174 on 2 and 87 DF, p-value: 0.0005582
```
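To see how `lm()` encodes such a categorical variable internally, we can peek at the model (design) matrix; group “A” is taken as the reference level and the other levels become 0/1 dummy columns.
```
# the distinct rows of the design matrix lm() builds from the grouping variable
unique(model.matrix(~ group, data = gene.df))
# the reference level can be changed before fitting if needed, e.g.
# gene.df$group <- relevel(factor(gene.df$group), ref = "B")
```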
### 3\.3\.5 Regression pitfalls
In most cases one should look at the error terms (residuals) vs. the fitted
values plot. Any structure in this plot indicates problems such as
non\-linearity, correlation of error terms, non\-constant variance or
unusual values driving the fit. Below we briefly explain the potential
issues with the linear regression.
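Before going through them one by one, here is a quick sketch of such a residuals\-vs\-fitted plot, using the `mod1` fit from above; base R’s `plot(mod1, which = 1)` produces a similar view.
```
# residuals vs. fitted values for the mod1 fit from above
plot(fitted(mod1), residuals(mod1),
     xlab = "fitted values", ylab = "residuals", pch = 20)
abline(h = 0, lty = 2)
```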
##### 3\.3\.5\.0\.1 Non\-linearity
If the true relationship is far from linearity, prediction accuracy
is reduced and all the other conclusions are questionable. In some cases,
transforming the data with \\(logX\\), \\(\\sqrt{X}\\), and \\(X^2\\) could resolve
the issue.
##### 3\.3\.5\.0\.2 Correlation of explanatory variables
If the explanatory variables are correlated with each other, that could lead to something known as multicollinearity. When this happens, the SE estimates of the coefficients will be too large. This is usually observed in time\-course data.
##### 3\.3\.5\.0\.3 Correlation of error terms
The model assumes that the errors of the response variables are uncorrelated with each other. If the errors are in fact correlated, the confidence intervals of the coefficients might be too narrow.
##### 3\.3\.5\.0\.4 Non\-constant variance of error terms
The model assumes that the errors have the same variance for all observations, regardless of the values of the predictor variables. If the error variance is not constant (e.g., the errors grow as X values increase), this will result in unreliable standard error estimates, as the model assumes constant variance. Transformation of the data, such as \\(logX\\) and \\(\\sqrt{X}\\), could help in some cases.
##### 3\.3\.5\.0\.5 Outliers and high leverage points
Outliers are extreme values for Y, and high leverage points are unusual X values. Both of these extremes have the power to affect the fitted line and the standard errors. In some cases (e.g., if there are measurement errors), they can be removed from the data for a better fit.
**Want to know more ?**
* Linear models and derivations of equations including matrix notation
+ *Applied Linear Statistical Models* by Kutner, Nachtsheim, et al. (Kutner, Nachtsheim, and Neter [2003](#ref-kutner2003applied))
+ *Elements of Statistical Learning* by Hastie \& Tibshirani (J. Friedman, Hastie, and Tibshirani [2001](#ref-friedman2001elements))
+ *An Introduction to Statistical Learning* by James, Witten, et al. (James, Witten, Hastie, et al. [2013](#ref-james2013introduction))
#### 3\.3\.0\.1 Matrix notation for linear models
We can naturally have more explanatory variables than just two. The formula
below has \\(n\\) explanatory variables.
\\\[Y\= \\beta\_0\+\\beta\_1X\_1\+\\beta\_2X\_2 \+ \\beta\_3X\_3 \+ .. \+ \\beta\_nX\_n \+\\epsilon\\]
If there are many variables, it would be easier
to write the model in matrix notation. The matrix form of linear model with
two explanatory variables will look like the one
below. The first matrix would be our data matrix. This contains our explanatory
variables and a column of 1s. The second term is a column vector of \\(\\beta\\)
values. We also add a vector of error terms, \\(\\epsilon\\)s, to the matrix multiplication.
\\\[
\\mathbf{Y} \= \\left\[\\begin{array}{rrr}
1 \& X\_{1,1} \& X\_{1,2} \\\\
1 \& X\_{2,1} \& X\_{2,2} \\\\
1 \& X\_{3,1} \& X\_{3,2} \\\\
1 \& X\_{4,1} \& X\_{4,2}
\\end{array}\\right]
%
\\left\[\\begin{array}{rrr}
\\beta\_0 \\\\
\\beta\_1 \\\\
\\beta\_2
\\end{array}\\right]
%
\+
\\left\[\\begin{array}{rrr}
\\epsilon\_1 \\\\
\\epsilon\_2 \\\\
\\epsilon\_3 \\\\
\\epsilon\_0
\\end{array}\\right]
\\]
The multiplication of the data matrix and \\(\\beta\\) vector and addition of the
error terms simply results in the following set of equations per data point:
\\\[
\\begin{aligned}
Y\_1\= \\beta\_0\+\\beta\_1X\_{1,1}\+\\beta\_2X\_{1,2} \+\\epsilon\_1 \\\\
Y\_2\= \\beta\_0\+\\beta\_1X\_{2,1}\+\\beta\_2X\_{2,2} \+\\epsilon\_2 \\\\
Y\_3\= \\beta\_0\+\\beta\_1X\_{3,1}\+\\beta\_2X\_{3,2} \+\\epsilon\_3 \\\\
Y\_4\= \\beta\_0\+\\beta\_1X\_{4,1}\+\\beta\_2X\_{4,2} \+\\epsilon\_4
\\end{aligned}
\\]
This expression involving the multiplication of the data matrix, the
\\(\\beta\\) vector and vector of error terms (\\(\\epsilon\\))
could be simply written as follows.
\\\[Y\=X\\beta \+ \\epsilon\\]
In the equation, above \\(Y\\) is the vector of response variables, \\(X\\) is the
data matrix, and \\(\\beta\\) is the vector of coefficients.
This notation is more concise and often used in scientific papers. However, this
also means you need some understanding of linear algebra to follow the math
laid out in such resources.
### 3\.3\.1 How to fit a line
At this point a major question is left unanswered: How did we fit this line?
We basically need to define \\(\\beta\\) values in a structured way.
There are multiple ways of understanding how
to do this, all of which converge to the same
end point. We will describe them one by one.
#### 3\.3\.1\.1 The cost or loss function approach
This is the first approach and in my opinion is easiest to understand.
We try to optimize a function, often called the “cost function” or “loss function”.
The cost function
is the sum of squared differences between the predicted \\(\\hat{Y}\\) values from our model
and the original \\(Y\\) values. The optimization procedure tries to find \\(\\beta\\) values
that minimize this difference between the reality and predicted values.
\\\[min \\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\]
Note that this is related to the error term, \\(\\epsilon\\), we already mentioned
above. We are trying to minimize the squared sum of \\(\\epsilon\_i\\) for each data
point. We can do this minimization by a bit of calculus.
The rough algorithm is as follows:
1. Pick a random starting point, random \\(\\beta\\) values.
2. Take the partial derivatives of the cost function to see which direction is
the way to go in the cost function.
3. Take a step toward the direction that minimizes the cost function.
* Step size is a parameter to choose, there are many variants.
4. Repeat step 2,3 until convergence.
This is the basis of the “gradient descent” algorithm. With the help of partial
derivatives we define a “gradient” on the cost function and follow that through
multiple iterations until convergence, meaning until the results do not
improve defined by a margin. The algorithm usually converges to optimum \\(\\beta\\)
values. In Figure [3\.14](relationship-between-variables-linear-models-and-correlation.html#fig:3dcostfunc), we show the cost function over various \\(\\beta\_0\\) and \\(\\beta\_1\\)
values for the histone modification and gene expression data set. The algorithm
will pick a point on this graph and traverse it incrementally based on the
derivatives and converge to the bottom of the cost function “well”. Such optimization methods are the core of machine learning methods we will cover later in Chapters [4](unsupervisedLearning.html#unsupervisedLearning) and
[5](supervisedLearning.html#supervisedLearning).
FIGURE 3\.14: Cost function landscape for linear regression with changing beta values. The optimization process tries to find the lowest point in this landscape by implementing a strategy for updating beta values toward the lowest point in the landscape.
#### 3\.3\.1\.2 Not cost function but maximum likelihood function
We can also think of this problem from a more statistical point of view. In
essence, we are looking for best statistical parameters, in this
case \\(\\beta\\) values, for our model that are most likely to produce such a
scatter of data points given the explanatory variables. This is called the
“maximum likelihood” approach. The approach assumes that a given response variable \\(y\_i\\) follows a normal distribution with mean \\(\\beta\_0\+\\beta\_1x\_i\\) and variance \\(s^2\\). Therefore the probability of observing any given \\(y\_i\\) value is dependent on the \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Since \\(x\_i\\), the explanatory variable, is fixed within our data set, we can maximize the probability of observing any given \\(y\_i\\) by varying \\(\\beta\_0\\) and \\(\\beta\_1\\) values. The trick is to find \\(\\beta\_0\\) and \\(\\beta\_1\\) values that maximizes the probability of observing all the response variables in the dataset given the explanatory variables. The probability of observing a response variable \\(y\_i\\) with assumptions we described above is shown below. Note that this assumes variance is constant and \\(s^2\=\\frac{\\sum{\\epsilon\_i}}{n\-2}\\) is an unbiased estimation for population variance, \\(\\sigma^2\\).
\\\[P(y\_{i})\=\\frac{1}{s\\sqrt{2\\pi} }e^{\-\\frac{1}{2}\\left(\\frac{y\_i\-(\\beta\_0 \+ \\beta\_1x\_i)}{s}\\right)^2}\\]
Following from the probability equation above, the likelihood function (shown as \\(L\\) below) for
linear regression is multiplication of \\(P(y\_{i})\\) for all data points.
\\\[L\=P(y\_1\)P(y\_2\)P(y\_3\)..P(y\_n)\=\\prod\\limits\_{i\=1}^n{P\_i}\\]
This can be simplified to the following equation by some algebra, assumption of normal distribution, and taking logs (since it is
easier to add than multiply).
\\\[ln(L) \= \-nln(s\\sqrt{2\\pi}) \- \\frac{1}{2s^2} \\sum\\limits\_{i\=1}^n{(y\_i\-(\\beta\_0 \+ \\beta\_1x\_i))^2} \\]
As you can see, the right part of the function is the negative of the cost function
defined above. If we wanted to optimize this function we would need to take the derivative of
the function with respect to the \\(\\beta\\) parameters. That means we can ignore the
first part since there are no \\(\\beta\\) terms there. This simply reduces to the
negative of the cost function. Hence, this approach produces exactly the same
result as the cost function approach. The difference is that we defined our
problem
within the domain of statistics. This particular function has still to be optimized. This can be done with some calculus without the need for an
iterative approach.
The maximum likelihood approach also opens up other possibilities for regression. For the case above, we assumed that the points around the mean are distributed by normal distribution. However, there are other cases where this assumption may not hold. For example, for the count data the mean and variance relationship is not constant; the higher the mean counts, the higher the variance. In these cases, the regression framework with maximum likelihood estimation can still be used. We simply change the underlying assumptions about the distribution and calculate the likelihood with a new distribution in mind,
and maximize the parameters for that likelihood. This gives way to “generalized linear model” approach where errors for the response variables can have other distributions than normal distribution. We will see examples of these generalized linear models in Chapter [8](rnaseqanalysis.html#rnaseqanalysis) and [10](bsseq.html#bsseq).
#### 3\.3\.1\.3 Linear algebra and closed\-form solution to linear regression
The last approach we will describe is the minimization process using linear
algebra. If you find this concept challenging, feel free to skip it, but scientific publications and other books frequently use matrix notation and linear algebra to define and solve regression problems. In this case, we do not use an iterative approach. Instead, we will
minimize the cost function by explicitly taking its derivatives with respect to
\\(\\beta\\)’s and setting them to zero. This is doable by employing linear algebra
and matrix calculus. This approach is also called “ordinary least squares”. We
will not
show the whole derivation here, but the following expression
is what we are trying to minimize in matrix notation, which is basically a
different notation of the same minimization problem defined above. Remember
\\(\\epsilon\_i\=Y\_i\-(\\beta\_0\+\\beta\_1x\_i)\\)
\\\[
\\begin{aligned}
\\sum\\epsilon\_{i}^2\=\\epsilon^T\\epsilon\=(Y\-{\\beta}{X})^T(Y\-{\\beta}{X}) \\\\
\=Y^T{Y}\-2{\\beta}^T{Y}\+{\\beta}^TX^TX{\\beta}
\\end{aligned}
\\]
After rearranging the terms, we take the derivative of \\(\\epsilon^T\\epsilon\\)
with respect to \\(\\beta\\), and equalize that to zero. We then arrive at
the following for estimated \\(\\beta\\) values, \\(\\hat{\\beta}\\):
\\\[\\hat{\\beta}\=(X^TX)^{\-1}X^TY\\]
This requires you to calculate the inverse of the \\(X^TX\\) term, which could
be slow for large matrices. Using an iterative approach over the cost function
derivatives will be faster for larger problems.
The linear algebra notation is something you will see in the papers
or other resources often. If you input the data matrix X and solve the \\((X^TX)^{\-1}\\)
,
you get the following values for \\(\\beta\_0\\) and \\(\\beta\_1\\) for simple regression . However, we should note that this simple linear regression case can easily
be solved algebraically without the need for matrix operations. This can be done
by taking the derivative of \\(\\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\) with respect to
\\(\\beta\_1\\), rearranging the terms and equalizing the derivative to zero.
\\\[\\hat{\\beta\_1}\=\\frac{\\sum{(x\_i\-\\overline{X})(y\_i\-\\overline{Y})}}{ \\sum{(x\_i\-\\overline{X})^2} }\\]
\\\[\\hat{\\beta\_0}\=\\overline{Y}\-\\hat{\\beta\_1}\\overline{X}\\]
#### 3\.3\.1\.4 Fitting lines in R
After all this theory, you will be surprised how easy it is to fit lines in R.
This is achieved just by the `lm()` function, which stands for linear models. Let’s do this
for a simulated data set and plot the fit. The first step is to simulate the
data. We will decide on \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Then we will decide
on the variance parameter, \\(\\sigma\\), to be used in simulation of error terms,
\\(\\epsilon\\). We will first find \\(Y\\) values, just using the linear equation
\\(Y\=\\beta0\+\\beta\_1X\\), for
a set of \\(X\\) values. Then, we will add the error terms to get our simulated values.
```
# set random number seed, so that the random numbers from the text
# is the same when you run the code.
set.seed(32)
# get 50 X values between 1 and 100
x = runif(50,1,100)
# set b0,b1 and variance (sigma)
b0 = 10
b1 = 2
sigma = 20
# simulate error terms from normal distribution
eps = rnorm(50,0,sigma)
# get y values from the linear equation and addition of error terms
y = b0 + b1*x+ eps
```
Now let us fit a line using the `lm()` function. The function requires a formula, and
optionally a data frame. We need to pass the following expression within the
`lm()` function, `y~x`, where `y` is the simulated \\(Y\\) values and `x` is the explanatory variables \\(X\\). We will then use the `abline()` function to draw the fit. The resulting plot is shown in Figure [3\.15](relationship-between-variables-linear-models-and-correlation.html#fig:geneExpLinearModel).
```
mod1=lm(y~x)
# plot the data points
plot(x,y,pch=20,
ylab="Gene Expression",xlab="Histone modification score")
# plot the linear fit
abline(mod1,col="blue")
```
FIGURE 3\.15: Gene expression and histone modification score modeled by linear regression.
#### 3\.3\.1\.1 The cost or loss function approach
This is the first approach and in my opinion is easiest to understand.
We try to optimize a function, often called the “cost function” or “loss function”.
The cost function
is the sum of squared differences between the predicted \\(\\hat{Y}\\) values from our model
and the original \\(Y\\) values. The optimization procedure tries to find \\(\\beta\\) values
that minimize this difference between the reality and predicted values.
\\\[min \\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\]
Note that this is related to the error term, \\(\\epsilon\\), we already mentioned
above. We are trying to minimize the squared sum of \\(\\epsilon\_i\\) for each data
point. We can do this minimization by a bit of calculus.
The rough algorithm is as follows:
1. Pick a random starting point, random \\(\\beta\\) values.
2. Take the partial derivatives of the cost function to see which direction is
the way to go in the cost function.
3. Take a step toward the direction that minimizes the cost function.
* Step size is a parameter to choose, there are many variants.
4. Repeat step 2,3 until convergence.
This is the basis of the “gradient descent” algorithm. With the help of partial
derivatives we define a “gradient” on the cost function and follow that through
multiple iterations until convergence, meaning until the results do not
improve defined by a margin. The algorithm usually converges to optimum \\(\\beta\\)
values. In Figure [3\.14](relationship-between-variables-linear-models-and-correlation.html#fig:3dcostfunc), we show the cost function over various \\(\\beta\_0\\) and \\(\\beta\_1\\)
values for the histone modification and gene expression data set. The algorithm
will pick a point on this graph and traverse it incrementally based on the
derivatives and converge to the bottom of the cost function “well”. Such optimization methods are the core of machine learning methods we will cover later in Chapters [4](unsupervisedLearning.html#unsupervisedLearning) and
[5](supervisedLearning.html#supervisedLearning).
FIGURE 3\.14: Cost function landscape for linear regression with changing beta values. The optimization process tries to find the lowest point in this landscape by implementing a strategy for updating beta values toward the lowest point in the landscape.
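Below is a minimal sketch of gradient descent for simple linear regression on a small simulated data set; the simulated values, step size, and iteration count are illustrative choices and not part of the original analysis.
```
# a minimal gradient descent sketch for simple linear regression
# (simulated data; the true values b0=1 and b1=2 are arbitrary choices)
set.seed(1)
x = runif(50, 0, 1)
y = 1 + 2*x + rnorm(50, 0, 0.2)

b0 = 0; b1 = 0        # starting point
step = 0.01           # step size (learning rate), a tuning parameter
for (i in 1:5000) {
  resid = y - (b0 + b1*x)
  # partial derivatives of the cost function sum(resid^2)
  grad_b0 = -2 * sum(resid)
  grad_b1 = -2 * sum(resid * x)
  b0 = b0 - step * grad_b0
  b1 = b1 - step * grad_b1
}
c(b0, b1)
coef(lm(y ~ x))       # should be very close to the values above
```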
#### 3\.3\.1\.2 Not cost function but maximum likelihood function
We can also think of this problem from a more statistical point of view. In
essence, we are looking for the best statistical parameters, in this
case \\(\\beta\\) values, for our model that are most likely to produce such a
scatter of data points given the explanatory variables. This is called the
“maximum likelihood” approach. The approach assumes that a given response variable \\(y\_i\\) follows a normal distribution with mean \\(\\beta\_0\+\\beta\_1x\_i\\) and variance \\(s^2\\). Therefore the probability of observing any given \\(y\_i\\) value depends on the \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Since \\(x\_i\\), the explanatory variable, is fixed within our data set, we can maximize the probability of observing any given \\(y\_i\\) by varying the \\(\\beta\_0\\) and \\(\\beta\_1\\) values. The trick is to find the \\(\\beta\_0\\) and \\(\\beta\_1\\) values that maximize the probability of observing all the response variables in the dataset given the explanatory variables. The probability of observing a response variable \\(y\_i\\) under the assumptions we described above is shown below. Note that this assumes the variance is constant and that \\(s^2\=\\frac{\\sum{\\epsilon\_i^2}}{n\-2}\\) is an unbiased estimator of the population variance, \\(\\sigma^2\\).
\\\[P(y\_{i})\=\\frac{1}{s\\sqrt{2\\pi} }e^{\-\\frac{1}{2}\\left(\\frac{y\_i\-(\\beta\_0 \+ \\beta\_1x\_i)}{s}\\right)^2}\\]
Following from the probability equation above, the likelihood function (shown as \\(L\\) below) for
linear regression is multiplication of \\(P(y\_{i})\\) for all data points.
\\\[L\=P(y\_1\)P(y\_2\)P(y\_3\)...P(y\_n)\=\\prod\\limits\_{i\=1}^n{P(y\_i)}\\]
This can be simplified to the following equation by some algebra, assumption of normal distribution, and taking logs (since it is
easier to add than multiply).
\\\[ln(L) \= \-nln(s\\sqrt{2\\pi}) \- \\frac{1}{2s^2} \\sum\\limits\_{i\=1}^n{(y\_i\-(\\beta\_0 \+ \\beta\_1x\_i))^2} \\]
As you can see, the right part of the function is the negative of the cost function
defined above. If we wanted to optimize this function we would need to take the derivative of
the function with respect to the \\(\\beta\\) parameters. That means we can ignore the
first part since there are no \\(\\beta\\) terms there. This simply reduces to the
negative of the cost function. Hence, this approach produces exactly the same
result as the cost function approach. The difference is that we defined our
problem
within the domain of statistics. This particular function still has to be optimized, which can be done with some calculus without the need for an
iterative approach.
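As a sanity check, the log\-likelihood can also be maximized numerically; below is a sketch that uses R’s general\-purpose `optim()` optimizer on simulated data and compares the result to `lm()`. The simulated values, starting values, and optimizer settings are illustrative assumptions.
```
# a sketch of numerical maximum likelihood for simple regression
set.seed(10)
x = runif(50, 1, 100)
y = 10 + 2*x + rnorm(50, 0, 20)

# negative log-likelihood under normally distributed errors
# par = c(b0, b1, log_s); the log keeps the standard deviation positive
negLogLik = function(par) {
  mu = par[1] + par[2]*x
  s  = exp(par[3])
  -sum(dnorm(y, mean = mu, sd = s, log = TRUE))
}

fit = optim(c(mean(y), 0, log(sd(y))), negLogLik,
            control = list(maxit = 5000))
fit$par[1:2]       # should be close to the lm() estimates
coef(lm(y ~ x))
```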
The maximum likelihood approach also opens up other possibilities for regression. For the case above, we assumed that the points around the mean follow a normal distribution. However, there are other cases where this assumption may not hold. For example, for count data the variance is not constant and depends on the mean: the higher the mean counts, the higher the variance. In these cases, the regression framework with maximum likelihood estimation can still be used. We simply change the underlying assumptions about the distribution and calculate the likelihood with a new distribution in mind,
and maximize the parameters for that likelihood. This gives rise to the “generalized linear model” approach, where the errors of the response variables can have distributions other than the normal distribution. We will see examples of these generalized linear models in Chapters [8](rnaseqanalysis.html#rnaseqanalysis) and [10](bsseq.html#bsseq).
#### 3\.3\.1\.3 Linear algebra and closed\-form solution to linear regression
The last approach we will describe is the minimization process using linear
algebra. If you find this concept challenging, feel free to skip it, but scientific publications and other books frequently use matrix notation and linear algebra to define and solve regression problems. In this case, we do not use an iterative approach. Instead, we will
minimize the cost function by explicitly taking its derivatives with respect to
\\(\\beta\\)’s and setting them to zero. This is doable by employing linear algebra
and matrix calculus. This approach is also called “ordinary least squares”. We
will not
show the whole derivation here, but the following expression
is what we are trying to minimize in matrix notation; it is simply a
different notation for the same minimization problem defined above. Remember
that \\(\\epsilon\_i\=Y\_i\-(\\beta\_0\+\\beta\_1x\_i)\\):
\\\[
\\begin{aligned}
\\sum\\epsilon\_{i}^2\=\\epsilon^T\\epsilon\=(Y\-X{\\beta})^T(Y\-X{\\beta}) \\\\
\=Y^T{Y}\-2{\\beta}^TX^T{Y}\+{\\beta}^TX^TX{\\beta}
\\end{aligned}
\\]
After rearranging the terms, we take the derivative of \\(\\epsilon^T\\epsilon\\)
with respect to \\(\\beta\\), and set that equal to zero. We then arrive at
the following for estimated \\(\\beta\\) values, \\(\\hat{\\beta}\\):
\\\[\\hat{\\beta}\=(X^TX)^{\-1}X^TY\\]
This requires you to calculate the inverse of the \\(X^TX\\) term, which could
be slow for large matrices. Using an iterative approach over the cost function
derivatives will be faster for larger problems.
The linear algebra notation is something you will often see in papers
and other resources. If you plug in the data matrix \\(X\\) and evaluate \\((X^TX)^{\-1}X^TY\\),
you get the following values for \\(\\beta\_0\\) and \\(\\beta\_1\\) for simple regression. However, we should note that this simple linear regression case can easily
be solved algebraically without the need for matrix operations. This can be done
by taking the derivative of \\(\\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}\\) with respect to
\\(\\beta\_1\\), rearranging the terms, and setting the derivative equal to zero.
\\\[\\hat{\\beta\_1}\=\\frac{\\sum{(x\_i\-\\overline{X})(y\_i\-\\overline{Y})}}{ \\sum{(x\_i\-\\overline{X})^2} }\\]
\\\[\\hat{\\beta\_0}\=\\overline{Y}\-\\hat{\\beta\_1}\\overline{X}\\]
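As an illustration, the closed\-form solution can be computed directly with matrix operations in R; the sketch below uses simulated data of the same kind as the next section and compares the result with `lm()`.
```
# a sketch of the closed-form (ordinary least squares) solution
set.seed(32)
x = runif(50, 1, 100)
y = 10 + 2*x + rnorm(50, 0, 20)

X = cbind(1, x)                      # design matrix: intercept column plus x
beta.hat = solve(t(X) %*% X) %*% t(X) %*% y
beta.hat
coef(lm(y ~ x))                      # same estimates
```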
#### 3\.3\.1\.4 Fitting lines in R
After all this theory, you will be surprised how easy it is to fit lines in R.
This is achieved just by the `lm()` function, which stands for linear models. Let’s do this
for a simulated data set and plot the fit. The first step is to simulate the
data. We will decide on \\(\\beta\_0\\) and \\(\\beta\_1\\) values. Then we will decide
on the variance parameter, \\(\\sigma\\), to be used in simulation of error terms,
\\(\\epsilon\\). We will first find \\(Y\\) values, just using the linear equation
\\(Y\=\\beta\_0\+\\beta\_1X\\), for
a set of \\(X\\) values. Then, we will add the error terms to get our simulated values.
```
# set the random number seed, so that the random numbers from the text
# are the same when you run the code.
set.seed(32)
# get 50 X values between 1 and 100
x = runif(50,1,100)
# set b0,b1 and variance (sigma)
b0 = 10
b1 = 2
sigma = 20
# simulate error terms from normal distribution
eps = rnorm(50,0,sigma)
# get y values from the linear equation and addition of error terms
y = b0 + b1*x+ eps
```
Now let us fit a line using the `lm()` function. The function requires a formula, and
optionally a data frame. We need to pass the following expression within the
`lm()` function, `y~x`, where `y` holds the simulated \\(Y\\) values and `x` the explanatory variable \\(X\\). We will then use the `abline()` function to draw the fit. The resulting plot is shown in Figure [3\.15](relationship-between-variables-linear-models-and-correlation.html#fig:geneExpLinearModel).
```
mod1=lm(y~x)
# plot the data points
plot(x,y,pch=20,
ylab="Gene Expression",xlab="Histone modification score")
# plot the linear fit
abline(mod1,col="blue")
```
FIGURE 3\.15: Gene expression and histone modification score modeled by linear regression.
### 3\.3\.2 How to estimate the error of the coefficients
Since we are using a sample to estimate the coefficients, they are
not exact; with every random sample they will vary. In Figure [3\.16](relationship-between-variables-linear-models-and-correlation.html#fig:regCoeffRandomSamples), we
take multiple samples from the population and fit lines to each
sample; with each sample the lines slightly change. We are overlaying the
points and the lines for each sample on top of the other samples. When we take 200 samples and fit lines for each of them, the line fits are
variable, and
we get a normal\-like distribution of \\(\\beta\\) values with a defined mean
and standard deviation; the standard deviation of this sampling distribution is called the standard error of the
coefficients.
FIGURE 3\.16: Regression coefficients vary with every random sample. The figure illustrates the variability of regression coefficients when regression is done using a sample of data points. Histograms depict this variability for \\(b\_0\\) and \\(b\_1\\) coefficients.
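A rough sketch of this idea in code: simulate many samples from the same population parameters, fit a line to each, and look at the spread of the slope estimates. The simulation settings below are illustrative and not the exact ones used to produce the figure.
```
# repeated sampling from the same population, keeping the slope estimate
set.seed(7)
b1.estimates = replicate(200, {
  x = runif(50, 1, 100)
  y = 10 + 2*x + rnorm(50, 0, 20)
  coef(lm(y ~ x))[2]
})
mean(b1.estimates)        # close to the true slope, 2
sd(b1.estimates)          # the spread is the standard error of the slope
hist(b1.estimates)
```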
Normally, we will not have access to the population to do repeated sampling,
model fitting, and estimation of the standard error for the coefficients. But
there is statistical theory that helps us infer the population properties from
the sample. When we assume that the error terms have constant variance and mean zero,
we can model the uncertainty in the regression coefficients, \\(\\beta\\)s.
The estimates for the standard errors of the \\(\\beta\\)s for simple regression are as
follows, shown without derivation.
\\\[
\\begin{aligned}
s\=RSE\=\\sqrt{\\frac{\\sum{(y\_i\-(\\beta\_0\+\\beta\_1x\_i))^2}}{n\-2} } \=\\sqrt{\\frac{\\sum{\\epsilon^2}}{n\-2} } \\\\
SE(\\hat{\\beta\_1})\=\\frac{s}{\\sqrt{\\sum{(x\_i\-\\overline{X})^2}}} \\\\
SE(\\hat{\\beta\_0})\=s\\sqrt{ \\frac{1}{n} \+ \\frac{\\overline{X}^2}{\\sum{(x\_i\-\\overline{X})^2} } }
\\end{aligned}
\\]
Notice that \\(SE(\\hat{\\beta\_1})\\) depends on the estimate of the variance of the
residuals, shown as \\(s\\) or the **Residual Standard Error (RSE)**.
Notice also that the standard error depends on the spread of \\(X\\). If the \\(X\\) values have more
variation, the standard error will be lower. This intuitively makes sense since if the
spread of \\(X\\) is low, the regression line will be able to wiggle more
compared to a regression line that is fit to the same number of points but
covers a greater range on the X\-axis.
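Assuming the simulated `x`, `y` and the fitted model `mod1` from the line\-fitting section are still in the workspace, these formulas can be checked against the standard errors that `summary()` reports; the snippet below is only a sanity\-check sketch.
```
n   = length(x)
rse = sqrt(sum(residuals(mod1)^2) / (n - 2))          # residual standard error
se.b1 = rse / sqrt(sum((x - mean(x))^2))
se.b0 = rse * sqrt(1/n + mean(x)^2 / sum((x - mean(x))^2))
c(se.b0, se.b1)
summary(mod1)$coefficients[, "Std. Error"]            # should match
```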
The standard error estimates can also be used to calculate confidence intervals and test
hypotheses, since the following quantity, called t\-score, approximately follows a
t\-distribution with \\(n\-p\\) degrees of freedom, where \\(n\\) is the number
of data points and \\(p\\) is the number of coefficients estimated.
\\\[ \\frac{\\hat{\\beta\_i}\-\\beta\_{test}}{SE(\\hat{\\beta\_i})}\\]
Often, we would like to test the null hypothesis that a coefficient is equal to
zero. For simple regression, this could mean testing whether there is a relationship
between the explanatory variable and the response variable. We would calculate the
t\-score as follows \\(\\frac{\\hat{\\beta\_i}\-0}{SE(\\hat{\\beta\_i})}\\), and compare it
to the t\-distribution with \\(d.f.\=n\-p\\) to get the p\-value.
We can also
calculate the uncertainty of the regression coefficients using confidence
intervals, the range of values that are likely to contain \\(\\beta\_i\\). The 95%
confidence interval for \\(\\hat{\\beta\_i}\\) is
\\(\\hat{\\beta\_i}\\) ± \\(t\_{0\.975}SE(\\hat{\\beta\_i})\\), where
\\(t\_{0\.975}\\) is the 97\.5th percentile of
the t\-distribution with \\(d.f. \= n \- p\\).
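As a quick sketch (assuming the fitted model `mod1` from before), the same interval can be computed by hand with `qt()` and compared to `confint()`.
```
est = coef(summary(mod1))[, "Estimate"]
se  = coef(summary(mod1))[, "Std. Error"]
tcrit = qt(0.975, df = df.residual(mod1))     # 97.5th percentile, d.f. = n - p
cbind(lower = est - tcrit*se, upper = est + tcrit*se)
confint(mod1)                                 # should give the same intervals
```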
In R, the `summary()` function will test all the coefficients for the null hypothesis
\\(\\beta\_i\=0\\). The function takes the model output obtained from the `lm()`
function. To demonstrate this, let us first get some data. The procedure below
simulates data to be used in a regression setting, and it is useful to examine
it to understand what kind of data the linear model expects.
Since we have the data, we can build our model and call the `summary` function.
We will then use the `confint()` function to get the confidence intervals on the
coefficients and the `coef()` function to pull out the estimated coefficients from
the model.
```
mod1=lm(y~x)
summary(mod1)
```
```
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -77.11 -18.44 0.33 16.06 57.23
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 13.24538 6.28869 2.106 0.0377 *
## x 0.49954 0.05131 9.736 4.54e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 28.77 on 98 degrees of freedom
## Multiple R-squared: 0.4917, Adjusted R-squared: 0.4865
## F-statistic: 94.78 on 1 and 98 DF, p-value: 4.537e-16
```
```
# get confidence intervals
confint(mod1)
```
```
## 2.5 % 97.5 %
## (Intercept) 0.7656777 25.7250883
## x 0.3977129 0.6013594
```
```
# pull out coefficients from the model
coef(mod1)
```
```
## (Intercept) x
## 13.2453830 0.4995361
```
The `summary()` function prints out an extensive list of values.
The “Coefficients” section has the estimates, their standard error, t score,
and the p\-value from the hypothesis test \\(H\_0:\\beta\_i\=0\\). As you can see, the
estimates we get for the coefficients and their standard errors are close to
the ones we get from repeatedly sampling and getting a distribution of
coefficients. This is statistical inference at work, so we can estimate the
population properties within a certain error using just a sample.
### 3\.3\.3 Accuracy of the model
If you have observed the table output of the `summary()` function, you must have noticed there are some other outputs, such as “Residual standard error”,
“Multiple R\-squared” and “F\-statistic”. These are metrics that are useful
for assessing the accuracy of the model. We will explain them one by one.
**RSE** is simply the square root of
the sum of squared error terms divided by the degrees of freedom, \\(n\-p\\). For the simple
linear regression case, the degrees of freedom is \\(n\-2\\). The sum of the squares of the error terms is also
called the **“Residual sum of squares”**, RSS. So the RSE is
calculated as follows:
\\\[ s\=RSE\=\\sqrt{\\frac{\\sum{(y\_i\-\\hat{Y\_i})^2 }}{n\-p}}\=\\sqrt{\\frac{RSS}{n\-p}}\\]
The RSE is a way of assessing the model fit. The larger the RSE the worse the
model is. However, this is an absolute measure in the units of \\(Y\\) and we have nothing to
compare against. One idea is to divide the RSS of our model by the RSS of a simpler model
for comparative purposes. That simpler model, in this case, is the model
with only the intercept, \\(\\beta\_0\\). A very bad model will have close to zero
coefficients for the explanatory variables, and the RSS of that model
will be close to the RSS of the model with only the intercept. In such
a model the intercept will be equal to \\(\\overline{Y}\\). As it turns out, the RSS of the model with
just the intercept is called the *“Total Sum of Squares” or TSS*. A good model will have a low \\(RSS/TSS\\). The metric \\(R^2\\) uses these quantities to calculate a score between 0 and 1, and the closer to 1, the better the model. Here is how
it is calculated:
\\\[R^2\=1\-\\frac{RSS}{TSS}\=\\frac{TSS\-RSS}{TSS}\\]
The \\(TSS\-RSS\\) part of the formula is often referred to as “explained variability” in
the model. The bottom part is for “total variability”. With this interpretation, the higher
the “explained variability”, the better the model. For simple linear regression
with one explanatory variable, the square root of \\(R^2\\) equals the absolute value of the correlation coefficient, a quantity that can be calculated for any pair of variables, not only
the response and the explanatory variables. *Correlation* is the general measure of the linear
relationship between two variables. One
of the most popular flavors of correlation is the Pearson correlation coefficient. Formally, it is the
*covariance* of X and Y divided by the product of the standard deviations of
X and Y. In R, it can be calculated with the `cor()` function.
\\\[
r\_{xy}\=\\frac{cov(X,Y)}{\\sigma\_x\\sigma\_y}
\=\\frac{\\sum\\limits\_{i\=1}^n (x\_i\-\\bar{x})(y\_i\-\\bar{y})}
{\\sqrt{\\sum\\limits\_{i\=1}^n (x\_i\-\\bar{x})^2 \\sum\\limits\_{i\=1}^n (y\_i\-\\bar{y})^2}}
\\]
In the equation above, \\(cov\\) is the covariance; this is again a measure of
how much two variables change together, like correlation. If two variables
show similar behavior, they will usually have a positive covariance value. If they have opposite behavior, the covariance will have a negative value.
However, these values are boundless. A normalized way of looking at
covariance is to divide the covariance by the product of the standard
deviations of X and Y. This bounds the values to \-1 and 1, and as mentioned
above, is called the Pearson correlation coefficient. Values that change in a similar manner will have a positive coefficient, values that change in
an opposite manner will have a negative coefficient, and pairs that do not have
a linear relationship will have \\(0\\) or near \\(0\\) correlation. In
Figure [3\.17](relationship-between-variables-linear-models-and-correlation.html#fig:CorCovar), we are showing \\(R^2\\), the correlation
coefficient, and covariance for different scatter plots.
FIGURE 3\.17: Correlation and covariance for different scatter plots.
For simple linear regression, correlation can be used to assess the model. However, this becomes useless as a measure of general accuracy
if there is more than one explanatory variable as in multiple linear regression. In that case, \\(R^2\\) is a measure
of accuracy for the model. Interestingly, the square of the
correlation of predicted values
and original response variables (\\((cor(Y,\\hat{Y}))^2\\) ) equals \\(R^2\\) for
multiple linear regression.
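This identity is easy to verify in R for the model fitted earlier; the snippet below is a sketch and assumes `mod1` and `y` are still available.
```
summary(mod1)$r.squared          # R-squared reported by summary()
cor(y, fitted(mod1))^2           # squared correlation of observed and fitted values
```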
The last measure of accuracy, or of model fit in general, that we are going to explain is the *F\-statistic*. This is a quantity that again depends on the RSS and TSS. It can also answer one important question that the other metrics cannot easily answer: whether any of the explanatory
variables have predictive value, or in other words, whether all of the coefficients of the explanatory variables are zero. We can write the null hypothesis as follows:
\\\[H\_0: \\beta\_1\=\\beta\_2\=\\beta\_3\=...\=\\beta\_p\=0 \\]
where the alternative is:
\\\[H\_1: \\text{at least one } \\beta\_i \\neq 0 \\]
Remember that \\(TSS\-RSS\\) is analogous to “explained variability” and the RSS is
analogous to “unexplained variability”. For the F\-statistic, we divide explained variance by
unexplained variance. The explained variance is just \\(TSS\-RSS\\) divided
by its degrees of freedom, and the unexplained variance is \\(RSS/(n\-p)\\), which is the square of the RSE.
The ratio will follow the F\-distribution
with two parameters, the degrees of freedom for the explained variance and
the degrees of freedom for the unexplained variance. The F\-statistic for a linear model is calculated as follows.
\\\[F\=\\frac{(TSS\-RSS)/(p\-1\)}{RSS/(n\-p)}\=\\frac{(TSS\-RSS)/(p\-1\)}{RSE^2} \\sim F(p\-1,n\-p)\\]
If the variances are the same, the ratio will be 1\. When \\(H\_0\\) is true, it can be shown that
the expected value of \\((TSS\-RSS)/(p\-1\)\\) will be \\(\\sigma^2\\), which is estimated by \\(RSE^2\\). So, if the variances are significantly different,
the ratio will need to be significantly bigger than 1\.
If the ratio is large enough, we can reject the null hypothesis. To assess that,
we need to use software or look up tables of the F statistic with the calculated
parameters. In R, the `qf()` function can be used to calculate the critical value of the
ratio. The benefit of the F\-test over
looking at the significance of the coefficients one by one is that we circumvent
the multiple testing problem. If there are many explanatory variables,
at least 5% of the time (assuming we use 0\.05 as the p\-value significance
cutoff) the p\-values from the coefficient t\-tests will appear significant just by chance. In summary, the F\-test is a better choice for testing if there is any association
between the explanatory variables and the response variable.
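To see where the F\-statistic in the `summary()` output comes from, it can be recomputed from the RSS and TSS of a fitted model; the sketch below assumes the `mod1` fit and the `y` values from before.
```
rss = sum(residuals(mod1)^2)
tss = sum((y - mean(y))^2)
n = length(y)
p = length(coef(mod1))                         # number of estimated coefficients
Fstat = ((tss - rss)/(p - 1)) / (rss/(n - p))
Fstat
pf(Fstat, p - 1, n - p, lower.tail = FALSE)    # p-value, as in summary(mod1)
```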
### 3\.3\.4 Regression with categorical variables
An important feature of linear regression is that categorical variables can
be used as explanatory variables. This feature is very useful in genomics,
where explanatory variables are often categorical. To put it in
context, in our histone modification example we could also include whether
promoters have CpG islands or not as a variable. In addition, in
differential gene expression analysis, we usually test the difference between
different conditions, which can be encoded as categorical variables in
a linear regression. We could of course use the t\-test for that as well if there are only 2 conditions, but if there are more conditions and other variables
to control for, such as age or sex of the samples, we need to take those
into account in our statistics, and the t\-test alone cannot handle such
complexity. In addition, when we have categorical variables we can also
have numeric variables in the model; we certainly do not have to include
only one type of variable in a model.
The simplest model with categorical variables includes two levels that
can be encoded as 0 and 1\. Below, we show linear regression with a categorical variable. We then plot the fitted line. This plot is shown in Figure [3\.18](relationship-between-variables-linear-models-and-correlation.html#fig:LMcategorical).
```
set.seed(100)
gene1=rnorm(30,mean=4,sd=2)
gene2=rnorm(30,mean=2,sd=2)
gene.df=data.frame(exp=c(gene1,gene2),
group=c( rep(1,30),rep(0,30) ) )
mod2=lm(exp~group,data=gene.df)
summary(mod2)
```
```
##
## Call:
## lm(formula = exp ~ group, data = gene.df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.7290 -1.0664 0.0122 1.3840 4.5629
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 2.1851 0.3517 6.214 6.04e-08 ***
## group 1.8726 0.4973 3.765 0.000391 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.926 on 58 degrees of freedom
## Multiple R-squared: 0.1964, Adjusted R-squared: 0.1826
## F-statistic: 14.18 on 1 and 58 DF, p-value: 0.0003905
```
```
require(mosaic)
plotModel(mod2)
```
FIGURE 3\.18: Linear model with a categorical variable coded as 0 and 1\.
We can even compare more than two levels, and we do not have to encode them
ourselves; we can pass categorical variables directly to the `lm()` function.
```
gene.df=data.frame(exp=c(gene1,gene2,gene2),
group=c( rep("A",30),rep("B",30),rep("C",30) )
)
mod3=lm(exp~group,data=gene.df)
summary(mod3)
```
```
##
## Call:
## lm(formula = exp ~ group, data = gene.df)
##
## Residuals:
## Min 1Q Median 3Q Max
## -4.7290 -1.0793 -0.0976 1.4844 4.5629
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.0577 0.3781 10.731 < 2e-16 ***
## groupB -1.8726 0.5348 -3.502 0.000732 ***
## groupC -1.8726 0.5348 -3.502 0.000732 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.071 on 87 degrees of freedom
## Multiple R-squared: 0.1582, Adjusted R-squared: 0.1388
## F-statistic: 8.174 on 2 and 87 DF, p-value: 0.0005582
```
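Behind the scenes, `lm()` converts the factor into dummy (0/1) columns; you can inspect this encoding with `model.matrix()`. This is a quick illustrative check, assuming `mod3` from above.
```
# the design matrix: an intercept plus 0/1 indicator columns for groups B and C
head(model.matrix(mod3))
```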
### 3\.3\.5 Regression pitfalls
In most cases one should look at the error terms (residuals) vs. the fitted
values plot. Any structure in this plot indicates problems such as
non\-linearity, correlation of error terms, non\-constant variance or
unusual values driving the fit. Below, we briefly explain these potential
issues with linear regression.
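In R, this diagnostic plot is one line for a fitted model object such as `mod1`:
```
# residuals vs. fitted values; look for curvature, funnel shapes or isolated points
plot(mod1, which = 1)
```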
##### 3\.3\.5\.0\.1 Non\-linearity
If the true relationship is far from linearity, prediction accuracy
is reduced and all the other conclusions are questionable. In some cases,
transforming the data with \\(logX\\), \\(\\sqrt{X}\\), and \\(X^2\\) could resolve
the issue.
##### 3\.3\.5\.0\.2 Correlation of explanatory variables
If the explanatory variables are correlated, that could lead to something
known as multicollinearity. When this happens, the SE estimates of the coefficients will be too large. This is usually observed in time\-course
data.
##### 3\.3\.5\.0\.3 Correlation of error terms
The model assumes that the errors of the response variables are uncorrelated with each other. If they are correlated, the confidence intervals of the coefficients
might be too narrow.
##### 3\.3\.5\.0\.4 Non\-constant variance of error terms
The model assumes that different response variables have the same variance in their errors, regardless of the values of the predictor variables. If
the error variance is not constant (e.g., the errors grow as the X values increase), this
will result in unreliable standard error estimates, as the model
assumes constant variance. Transformation of the data, such as
\\(logX\\) and \\(\\sqrt{X}\\), could help in some cases.
##### 3\.3\.5\.0\.5 Outliers and high leverage points
Outliers are extreme values for Y and high leverage points are unusual
X values. Both of these extremes have the power to affect the fitted line
and the standard errors. In some cases (e.g., if there are measurement errors), they can be
removed from the data for a better fit.
**Want to know more?**
* Linear models and derivations of equations including matrix notation
+ *Applied Linear Statistical Models* by Kutner, Nachtsheim, et al. (Kutner, Nachtsheim, and Neter [2003](#ref-kutner2003applied))
+ *Elements of Statistical Learning* by Hastie \& Tibshirani (J. Friedman, Hastie, and Tibshirani [2001](#ref-friedman2001elements))
+ *An Introduction to Statistical Learning* by James, Witten, et al. (James, Witten, Hastie, et al. [2013](#ref-james2013introduction))
3\.4 Exercises
--------------
### 3\.4\.1 How to summarize collection of data points: The idea behind statistical distributions
1. Calculate the means and variances
of the rows of the following simulated data set, and plot the distributions
of means and variances using `hist()` and `boxplot()` functions. \[Difficulty: **Beginner/Intermediate**]
```
set.seed(100)
#sample data matrix from normal distribution
gset=rnorm(600,mean=200,sd=70)
data=matrix(gset,ncol=6)
```
2. Using the data generated above, calculate the standard deviation of the
distribution of the means using the `sd()` function. Compare that to the expected
standard error obtained from the central limit theorem keeping in mind the
population parameters were \\(\\sigma\=70\\) and \\(n\=6\\). How does the estimate from the random samples change if we simulate more data with
`data=matrix(rnorm(6000,mean=200,sd=70),ncol=6)`? \[Difficulty: **Beginner/Intermediate**]
3. Simulate 30 random variables using the `rpois()` function. Do this 1000 times and calculate the mean of each sample. Plot the sampling distributions of the means
using a histogram. Get the 2\.5th and 97\.5th percentiles of the
distribution. \[Difficulty: **Beginner/Intermediate**]
4. Use the `t.test()` function to calculate confidence intervals
of the mean on the first random sample `pois1` simulated from the `rpois()` function below. \[Difficulty: **Intermediate**]
```
#HINT
set.seed(100)
#sample 30 values from a Poisson distribution with lambda parameter = 5
pois1=rpois(30,lambda=5)
```
5. Use the bootstrap confidence interval for the mean on `pois1`. \[Difficulty: **Intermediate/Advanced**]
6. Compare the theoretical confidence interval of the mean from the `t.test` and the bootstrap confidence interval. Are they similar? \[Difficulty: **Intermediate/Advanced**]
7. Try to re\-create the following figure, which demonstrates the CLT concept.\[Difficulty: **Advanced**]
### 3\.4\.2 How to test for differences in samples
1. Test the difference of means of the following simulated genes
using the randomization, `t.test()`, and `wilcox.test()` functions.
Plot the distributions using histograms and boxplots. \[Difficulty: **Intermediate/Advanced**]
```
set.seed(101)
gene1=rnorm(30,mean=4,sd=3)
gene2=rnorm(30,mean=3,sd=3)
```
2. Test the difference of the means of the following simulated genes
using the randomization, `t.test()` and `wilcox.test()` functions.
Plot the distributions using histograms and boxplots. \[Difficulty: **Intermediate/Advanced**]
```
set.seed(100)
gene1=rnorm(30,mean=4,sd=2)
gene2=rnorm(30,mean=2,sd=2)
```
3. We need an extra data set for this exercise. Read the gene expression data set as follows:
`gexpFile=system.file("extdata","geneExpMat.rds",package="compGenomRData") data=readRDS(gexpFile)`. The data has 100 differentially expressed genes. The first 3 columns are the test samples, and the last 3 are the control samples. Do
a t\-test for each gene (each row is a gene), and record the p\-values.
Then, do a moderated t\-test, as shown in section “Moderated t\-tests” in this chapter, and record
the p\-values. Make a p\-value histogram and compare two approaches in terms of the number of significant tests with the \\(0\.05\\) threshold.
On the p\-values use FDR (BH), Bonferroni and q\-value adjustment methods.
Calculate how many adjusted p\-values are below 0\.05 for each approach.
\[Difficulty: **Intermediate/Advanced**]
### 3\.4\.3 Relationship between variables: Linear models and correlation
Below we are going to simulate X and Y values that are needed for the
rest of the exercise.
```
# set the random number seed, so that the random numbers from the text
# are the same when you run the code.
set.seed(32)
# get 50 X values between 1 and 100
x = runif(50,1,100)
# set b0,b1 and variance (sigma)
b0 = 10
b1 = 2
sigma = 20
# simulate error terms from normal distribution
eps = rnorm(50,0,sigma)
# get y values from the linear equation and addition of error terms
y = b0 + b1*x+ eps
```
1. Run the code then fit a line to predict Y based on X. \[Difficulty:**Intermediate**]
2. Plot the scatter plot and the fitted line. \[Difficulty:**Intermediate**]
3. Calculate correlation and R^2\. \[Difficulty:**Intermediate**]
4. Run the `summary()` function and
try to extract P\-values for the model from the object
returned by `summary`. See `?summary.lm`. \[Difficulty:**Intermediate/Advanced**]
5. Plot the residuals vs. the fitted values plot, by calling the `plot()`
function with `which=1` as the second argument. First argument
is the model returned by `lm()`. \[Difficulty:**Advanced**]
6. For the next exercises, read the histone modification data set. Use the following to get the path to the file:
```
hmodFile=system.file("extdata",
"HistoneModeVSgeneExp.rds",
package="compGenomRData")`
```
There are 3 columns in the dataset. These are measured levels of H3K4me3,
H3K27me3 and gene expression per gene. Once you read in the data, plot the scatter plot for H3K4me3 vs. expression. \[Difficulty:**Beginner**]
7. Plot the scatter plot for H3K27me3 vs. expression. \[Difficulty:**Beginner**]
8. Fit the model for prediction of expression data using: 1\) Only H3K4me3 as explanatory variable, 2\) Only H3K27me3 as explanatory variable, and 3\) Using both H3K4me3 and H3K27me3 as explanatory variables. Inspect the `summary()` function output in each case, which terms are significant. \[Difficulty:**Beginner/Intermediate**]
9. Is using H3K4me3 and H3K27me3 better than the model with only H3K4me3? \[Difficulty:**Intermediate**]
10. Plot H3K4me3 vs. H3K27me3\. Inspect the points that do not
follow a linear trend. Are they clustered at certain segments
of the plot? Bonus: Is there any biological or technical interpretation
for those points? \[Difficulty:**Intermediate/Advanced**]
| Life Sciences |
shainarace.github.io | https://shainarace.github.io/LinearAlgebra/r-programming-basics.html |
Chapter 4 R Programming Basics
==============================
Before we get started, you will need to know the basics of matrix manipulation in the R programming language:
* Generally matrices are entered in as one vector, which R then breaks apart into rows and columns in the way that you specify (with `nrow`/`ncol`). The default way that R reads a vector into a matrix is down the columns. To read the data in across the rows, use the `byrow=TRUE` option. This is only relevant if you’re entering matrices from scratch.
```
Y=matrix(c(1,2,3,4),nrow=2,ncol=2)
Y
```
```
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
```
```
X=matrix(c(1,2,3,4),nrow=2,ncol=2,byrow=TRUE)
X
```
```
## [,1] [,2]
## [1,] 1 2
## [2,] 3 4
```
* The standard multiplication symbol, `*`, will unfortunately provide unexpected results if you are looking for matrix multiplication: `*` multiplies matrices *elementwise*. In order to do matrix multiplication, the operator is `%*%`.
```
X*X
```
```
## [,1] [,2]
## [1,] 1 4
## [2,] 9 16
```
```
X%*%X
```
```
## [,1] [,2]
## [1,] 7 10
## [2,] 15 22
```
* To transpose a matrix or a vector \\(\\X\\), use the `t()` function.
```
t(X)
```
```
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
```
* R indexes vectors and matrices starting with \\(i\=1\\) (as opposed to \\(i\=0\\) in Python).
* `X[i,j]` gives element \\(\\X\_{ij}\\). You can alter individual elements this way.
```
X[2,1]
```
```
## [1] 3
```
```
X[2,1]=100
X
```
```
## [,1] [,2]
## [1,] 1 2
## [2,] 100 4
```
* To create a vector of all ones, \\(\\e\\), use the `rep()` function
```
e=rep(1,5)
e
```
```
## [1] 1 1 1 1 1
```
* To compute the mean of a vector, use the `mean()` function. To compute the column means of a matrix (or data frame), use the `colMeans()` function. You can also use the `apply()` function, which is necessary if you want column standard deviations (`sd()` function). `apply(X,dim,function)` applies the specified function to the specified dimension `dim` (1 for rows, 2 for columns) of the matrix or data frame X.
```
# Start by generating random ~N(0,1) data:
A=replicate(2,rnorm(5))
colMeans(A)
```
```
## [1] -0.4884781 0.2465562
```
```
# (Why aren't the means close to zero?)
A=replicate(2,rnorm(100))
colMeans(A)
```
```
## [1] -0.14709807 0.05484491
```
```
# Law of Large Numbers: with more observations, the sample means get closer to zero.
apply(A,2,sd)
```
```
## [1] 0.9951114 0.9658601
```
```
# To apply a "homemade function" you must create it as a function
# Here we apply a sum of squares function for the first 5 rows of A:
apply(A[1:5, ],1,function(x) x%*%x)
```
```
## [1] 1.7102525 1.0398961 4.1784246 3.9187167 0.5713711
```
```
# Here we center the data by subtracting the mean vector:
B=apply(A,2,function(x) x-mean(x))
colMeans(B)
```
```
## [1] 1.804112e-18 -1.713907e-17
```
```
# R doesn't tell you when things are zero to machine precision. "Machine zero" in
# R is given by the internal variable .Machine$double.eps
colMeans(B) < .Machine$double.eps
```
```
## [1] TRUE TRUE
```
* To invert a matrix, use the `solve()` command.
```
Xinv=solve(X)
X%*%Xinv
```
```
## [,1] [,2]
## [1,] 1 0
## [2,] 0 1
```
* To determine size of a matrix, use the `dim()` function. The result is a vector with two values: `dim(x)[1]` provides the number of rows and `dim(x)[2]` provides the number of columns. You can label rows/columns of a matrix using the `rownames()` or `colnames()` functions.
```
dim(A)
```
```
## [1] 100 2
```
```
nrows=dim(A)[1]
ncols=dim(A)[2]
colnames(A)=c("This","That")
A[1:5, ]
```
```
## This That
## [1,] -1.2985084 0.1553331
## [2,] 0.9521460 -0.3651220
## [3,] 1.8559421 0.8566817
## [4,] -1.8959629 -0.5692463
## [5,] 0.4465415 0.6098949
```
* Most arithmetic functions you apply to a vector act elementwise. In R, \\(\\x^2\\) will be a vector containing the squares of the elements in \\(\\x\\). You can add a column to a matrix (or a data frame) by using the `cbind()` function.
```
# Add a column containing the square of the second column
A=cbind(A,A[ ,2]^2)
colnames(A)
```
```
## [1] "This" "That" ""
```
```
colnames(A)[3]="That Squared"
colnames(A)
```
```
## [1] "This" "That" "That Squared"
```
* You can compute vector norms using the `norm()` function. Unfortunately, the default norm is *not* the \\(2\\)\-norm (it should be!), so we must specify `type="2"` as the second argument to the function.
```
x=c(1,1,1)
y=c(1,0,0)
norm(x,type="2")
```
```
## [1] 1.732051
```
```
# It's actually fewer characters to work from the equivalent definition:
sqrt(x%*%x)
```
```
## [,1]
## [1,] 1.732051
```
```
norm(y,type="2")
```
```
## [1] 1
```
```
norm(x-y,type="2")
```
```
## [1] 1.414214
```
You’ll learn many additional R techniques throughout this course, but our strategy in this text will be to pick them up as we go as opposed to trying to remember them from the beginning.
Chapter 5 Solving Systems of Equations
======================================
In this section we will learn about solving the systems of equations that were presented in Chapter [3](multapp.html#multapp). There are three general situations we may find ourselves in when attempting to solve systems of equations:
1. The system could have one unique solution.
2. The system could have infinitely many solutions (sometimes called *underdetermined*).
3. The system could have no solutions (sometimes called *overdetermined* or *inconsistent*).
Luckily, no matter what type of system we are dealing with, the method of arriving at the answer (should it exist) is the same. The process is called Gaussian (or Gauss\-Jordan) Elimination.
5\.1 Gaussian Elimination
-------------------------
Gauss\-Jordan Elimination is essentially the same process of elimination you may have used in an Algebra class in primary school. Suppose, for example, we have the following simple system of equations:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+2x\_2 \&\=\& 11\\\\
x\_1\+x\_2 \&\=\& 6\\end{eqnarray}\\end{cases}\\]
One simple way to solve this system of equations is to subtract the second equation from the first. By this we mean that we’d perform subtraction on the left hand and right hand sides of the equation:
\\\[\\begin{array}{rcr} x\_1\+2x\_2 \&\=\& 11\\\\ \-(x\_1\+x\_2\) \&\=\& \-6\\\\ \\hline x\_2 \&\=\& 5 \\end{array}\\]
This operation is clearly allowed because the two subtracted quantities are equal (by the very definition of an equation!). What we are left with is one much simpler equation,
\\\[x\_2\=5\\]
using this information, we can return to the first equation, substitute and solve for \\(x\_1\\):
\\\[\\begin{eqnarray}
x\_1\+2(5\)\&\=\&11 \\\\
x\_1 \&\=\& 1
\\end{eqnarray}\\]
This final process of substitution is often called **back substitution.** Once we have a sufficient amount of information, we can use that information to substitute and solve for the remainder.
### 5\.1\.1 Row Operations
In the previous example, we demonstrated one operation that can be performed on systems of equations without changing the solution: one equation can be added to a multiple of another (in that example, the multiple was \-1\). For any system of equations, there are 3 operations which will not change the solution set:
1. Interchanging the order of the equations.
2. Multiplying both sides of one equation by a constant.
3. Replace one equation by a linear combination of itself and of another equation.
Taking our simple system from the previous example, we’ll examine these three operations concretely:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+2x\_2 \&\=\& 11\\\\
x\_1\+x\_2 \&\=\& 6\\end{eqnarray}\\end{cases}\\]
1. Interchanging the order of the equations.
\\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\= 11\\\\ x\_1\+x\_2 \&\= 6\\end{align}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{align} x\_1\+x\_2 \&\= 6\\\\ x\_1\+2x\_2 \&\= 11\\end{align}\\end{cases}\\]
2. Multiplying both sides of one equation by a constant. *(Multiply the second equation by \-1\)*.
\\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\=\& 11\\\\ x\_1\+x\_2 \&\=\& 6\\end{align}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\=\& 11\\\\ \-1x\_1\-1x\_2 \&\=\& \-6\\end{align}\\end{cases}\\]
3. Replace one equation by a linear combination of itself and of another equation. *(Replace the second equation by the first minus the second.)*
\\\[\\begin{cases}\\begin{eqnarray} x\_1\+2x\_2 \&\=\& 11\\\\ x\_1\+x\_2 \&\=\& 6\\end{eqnarray}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{eqnarray} x\_1\+2x\_2 \&\=\& 11\\\\ x\_2 \&\=\& 5\\end{eqnarray}\\end{cases}\\]
Using these 3 row operations, we can transform any system of equations into one that is *triangular*. A **triangular system** is one that can be solved by back substitution. For example,
\\\[\\begin{cases}\\begin{align}
x\_1\+2x\_2 \+3x\_3\= 14\\\\
x\_2\+x\_3 \=6\\\\
x\_3 \= 1\\end{align}\\end{cases}\\]
is a triangular system. Using substitution, the second equation will give us the value for \\(x\_2\\), which will allow for further substitution into the first equation to solve for the value of \\(x\_1\\). Let’s take a look at an example of how we can transform any system to a triangular system.
**Example 5\.1 (Transforming a System to a Triangular System via 3 Operations)** Solve the following system of equations:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3\&\=\& 1\\\\
x\_1\-2x\_2\+2x\_3 \&\=\&4\\\\
x\_1\+2x\_2\-x\_3 \&\=\& 2\\end{eqnarray}\\end{cases}\\]
To turn this into a triangular system, we will want to eliminate the variable \\(x\_1\\) from two of the equations. We can do this by taking the following operations:
1. Replace equation 2 with (equation 2 \- equation 1\).
2. Replace equation 3 with (equation 3 \- equation 1\).
Then, our system becomes:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3\&\=\& 1\\\\
\-3x\_2\+x\_3 \&\=\&3\\\\
x\_2\-2x\_3 \&\=\& 1\\end{eqnarray}\\end{cases}\\]
Next, we will want to eliminate the variable \\(x\_2\\) from the third equation. We can do this by replacing equation 3 with (equation 3 \+ \\(\\frac{1}{3}\\) equation 2\). *However*, we can avoid dealing with fractions if instead we:
3. Swap equations 2 and 3\.
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3 \&\=\& 1\\\\
x\_2\-2x\_3 \&\=\& 1\\\\
\-3x\_2\+x\_3 \&\=\&3\\end{eqnarray}\\end{cases}\\]
Now, as promised our math is a little simpler:
4. Replace equation 3 with (equation 3 \+ 3\*equation 2\).
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3 \&\=\& 1 \\\\
x\_2\-2x\_3 \&\=\& 1 \\\\
\-5x\_3 \&\=\&6 \\end{eqnarray}\\end{cases}\\]
Now that our system is in triangular form, we can use substitution to solve for all of the variables:
\\\[x\_1 \= 3\.6 \\quad x\_2 \= \-1\.4 \\quad x\_3 \= \-1\.2 \\]
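As a quick numerical check (not a replacement for the elimination steps), the same solution can be obtained with R’s `solve()` function introduced in the previous chapter:
```
A = matrix(c(1, 1, 1,
             1,-2, 2,
             1, 2,-1), nrow=3, byrow=TRUE)
b = c(1, 4, 2)
solve(A, b)   # returns 3.6 -1.4 -1.2, matching the solution above
```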
This is the procedure for Gaussian Elimination, which we will now formalize in its matrix version.
### 5\.1\.2 The Augmented Matrix
When solving systems of equations, we will commonly use the **augmented matrix**.
**Definition 5\.1 (The Augmented Matrix)** The **augmented matrix** of a system of equations is simply the matrix which contains all of the coefficients of the equations, augmented with an extra column holding the values on the right hand sides of the equations. If our system is:
\\\[\\begin{cases}\\begin{eqnarray}
a\_{11}x\_1\+a\_{12}x\_2 \+a\_{13}x\_3\&\=\& b\_1\\\\
a\_{21}x\_1\+a\_{22}x\_2 \+a\_{23}x\_3\&\=\& b\_2\\\\
a\_{31}x\_1\+a\_{32}x\_2 \+a\_{33}x\_3\&\=\&b\_3 \\end{eqnarray}\\end{cases}\\]
Then the corresponding augmented matrix is
\\\[\\left(\\begin{array}{rrr\|r}
a\_{11}\&a\_{12}\&a\_{13}\& b\_1\\\\
a\_{21}\&a\_{22}\&a\_{23}\& b\_2\\\\
a\_{31}\&a\_{32}\&a\_{33}\& b\_3\\\\
\\end{array}\\right)\\]
Using this augmented matrix, we can contain all of the information needed to perform the three operations outlined in the previous section. We will formalize these operations as they pertain to the rows (i.e. individual equations) of the augmented matrix (i.e. the entire system) in the following definition.
**Definition 5\.2 (Row Operations for Gaussian Elimination)** Gaussian Elimination is performed on the rows, \\(\\arow{i},\\) of an augmented matrix, \\\[\\A \= \\pm \\arow{1}\\\\\\arow{2}\\\\\\arow{3}\\\\\\vdots\\\\\\arow{m}\\mp\\] by using the three **elementary row operations**:
1. Swap rows \\(i\\) and \\(j\\).
2. Replace row \\(i\\) by a nonzero multiple of itself.
3. Replace row \\(i\\) by a linear combination of itself plus a multiple of row \\(j\\).
The ultimate goal of Gaussian elimination is to transform an augmented matrix into an **upper\-triangular matrix** which allows for backsolving.
\\\[\\A \\rightarrow \\left(\\begin{array}{rrrr\|r}
t\_{11}\& t\_{12}\& \\dots\& t\_{1n}\&c\_1\\cr
0\& t\_{22}\& \\dots\& t\_{2n}\&c\_2\\cr
\\vdots\& \\vdots\& \\ddots\& \\vdots\&\\vdots\\cr
0\& 0\& \\dots\& t\_{nn}\&c\_n\\end{array}\\right)\\]
The key to this process at each step is to focus on one position, called the *pivot position* or simply the *pivot*, and try to eliminate all terms below this position using the three row operations. Only nonzero numbers are allowed to be pivots. If a coefficient in a pivot position is ever 0, then the rows of the matrix should be interchanged to find a nonzero pivot. If this is not possible then we continue on to the next possible column where a pivot position can be created.
Let’s now go through a detailed example of Gaussian elimination using the augmented matrix. We will use the same example (and same row operations) from the previous section to demonstrate the idea.
**Example 5\.2 (Row Operations on the Augmented Matrix)** We will solve the system of equations from Example [5\.1](solvesys.html#exm:rowopeq) using the Augmented Matrix.
\\\[\\begin{equation\*}\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_1\-2x\_2\+2x\_3 \=4\\\\
x\_1\+2x\_2\-x\_3 \= 2\\end{align}\\end{cases}
\\end{equation\*}\\]
Our first step will be to write the augmented matrix and identify the current pivot. Here, a square is drawn around the pivot and the numbers below the pivot are circled. It is our goal to eliminate the circled numbers using the row with the pivot.
\\\[\\begin{equation\*}
\\left(\\begin{array}{rrr\|r}
1 \& 1 \& 1 \& 1\\\\
1 \& \-2 \& 2 \&4\\\\
1\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{Current Pivot}\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}} \& \-2 \& 2 \&4\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\end{equation\*}\\]
We can eliminate the circled elements by making combinations with those rows and the pivot row. For instance, we’d replace row 2 by the combination (row 2 \- row 1\). Our shorthand notation for this will be R2’ \= R2\-R1\. Similarly we will replace row 3 in the next step.
\\\[\\begin{equation\*}
\\xrightarrow{R2'\=R2\-R1}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\red{0} \& \\red{\-3} \& \\red{1} \&\\red{3}\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{R3'\=R3\-R1} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \-3 \& 1 \&3\\\\
\\red{0}\&\\red{1}\&\\red{\-2}\&\\red{1}
\\end{array}\\right)
\\end{equation\*}\\]
Now that we have eliminated each of the circled elements below the current pivot, we will continue on to the next pivot, which is \-3\. Looking into the future, we can either do the operation \\(R3'\=R3\+\\frac{1}{3}R2\\) or we can interchange rows 2 and 3 to avoid fractions in our next calculation. To keep things neat, we will do the latter. (*note: either way you proceed will lead you to the same solution!*)
\\\[\\begin{equation\*} \\xrightarrow{Next Pivot}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \\fbox{\-3} \& 1 \&3\\\\
0\&\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&\-2\&1
\\end{array}\\right)
\\xrightarrow{R2 \\leftrightarrow R3} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
0 \& \\enclose{circle}\[mathcolor\="red"]{\\color{black}{\-3}} \& 1 \&3
\\end{array}\\right)
\\end{equation\*}\\]
Now that the current pivot is equal to 1, we can easily eliminate the circled entries below it by replacing rows with combinations using the pivot row. We finish the process once the last pivot is identified (the final pivot has no eliminations to make below it).
\\\[\\begin{equation\*}
\\xrightarrow{R3'\=R3\+3R2}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
\\red{0} \& \\red{0} \& \\red{\\fbox{\-5}} \&\\red{6}
\\end{array}\\right)
\\end{equation\*}\\]
At this point, when all the pivots have been reached, the augmented matrix is said to be in **row\-echelon form**. This simply means that all of the entries below the pivots are equal to 0\. The augmented matrix can be transformed back into equation form now that it is in a triangular form:
\\\[\\begin{equation\*}
\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_2\-2x\_3 \= 1\\\\
5x\_3 \=\-6\\end{align}\\end{cases}
\\end{equation\*}\\]
Which is the same system we finally solved in Example [5\.1](solvesys.html#exm:rowopeq) to get the final solution:
\\\[x\_1 \= 3\.6 \\quad x\_2 \= \-1\.4 \\quad x\_3 \= \-1\.2 \\]
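If you want to double-check a hand computation like this one, base R can solve the same system directly. This is only a verification sketch; `solve()` is not part of the elimination procedure described above.

```
A <- matrix(c(1,  1,  1,
              1, -2,  2,
              1,  2, -1), nrow = 3, byrow = TRUE)  # coefficient matrix
b <- c(1, 4, 2)                                    # right hand side
solve(A, b)   # returns 3.6 -1.4 -1.2, matching the hand solution
```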
### 5\.1\.3 Gaussian Elimination Summary
Let’s summarize the process of Gaussian elimination step\-by\-step. We work from the upper\-left\-hand corner of the matrix to the lower\-right\-hand corner:
1. Focusing on the first column, identify the first pivot element. The first pivot element should be located in the first row (if this entry is zero, we must interchange rows so that it is non\-zero).
2. Eliminate (zero\-out) all elements below the pivot using the combination row operation.
3. Determine the next pivot and go back to step 2\.
    * Only nonzero numbers are allowed to be pivots.
    * If a coefficient in the next pivot position is 0, then the rows of the matrix should be interchanged to find a nonzero pivot.
    * If this is not possible then we continue on to the next column to determine a pivot.
4. When the entries below all of the pivots are equal to zero, the process stops. The augmented matrix is said to be in *row\-echelon form*, which corresponds to a *triangular* system of equations, suitable to solve using back substitution.
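Before trying the exercise below, it may help to see the three row operations expressed as plain matrix indexing in R. The following is a minimal sketch (not a function from the text) that reproduces the steps of Example 5\.2 on its augmented matrix.

```
# Augmented matrix from Example 5.2
aug <- matrix(c(1,  1,  1, 1,
                1, -2,  2, 4,
                1,  2, -1, 2), nrow = 3, byrow = TRUE)
aug[2, ] <- aug[2, ] - aug[1, ]      # R2' = R2 - R1
aug[3, ] <- aug[3, ] - aug[1, ]      # R3' = R3 - R1
aug[c(2, 3), ] <- aug[c(3, 2), ]     # interchange R2 and R3
aug[3, ] <- aug[3, ] + 3 * aug[2, ]  # R3' = R3 + 3R2
aug   # row-echelon form: the last row reads -5*x3 = 6
```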
**Exercise 5\.1 (Gaussian Elimination and Back Substitution)** Use Gaussian Elimination and back substitution to solve the following system.
\\\[\\begin{cases}\\begin{align}
2x\_1\-x\_2\=1\\\\
\-x\_1\+2x\_2\-x\_3\=0\\\\
\-x\_2\+x\_3\=0\\end{align}\\end{cases}\\]
5\.2 Gauss\-Jordan Elimination
------------------------------
Gauss\-Jordan elimination is Gaussian elimination taken one step further. In Gauss\-Jordan elimination, we do not stop when the augmented matrix is in row\-echelon form. Instead, we force all the pivot elements to equal 1 and we continue to eliminate entries *above* the pivot elements to reach what’s called **reduced row echelon form**.
Let’s take a look at another example:
**Example 5\.3 (Gauss\-Jordan elimination)** We begin with a system of equations, and transform it into an augmented matrix:
\\\[\\begin{cases}\\begin{align}
x\_2 \-x\_3\= 3\\\\
\-2x\_1\+4x\_2\-x\_3 \= 1\\\\
\-2x\_1\+5x\_2\-4x\_3 \=\-2\\end{align}\\end{cases}
\\Longrightarrow \\left(\\begin{array}{rrr\|r}0\&1\&\-1\&3\\\\\-2\&4\&\-1\&1\\\\\-2\&5\&\-4\&\-2\\end{array}\\right) \\]
We start by locating our first pivot element. This element cannot be zero, so we will have to swap rows to bring a non\-zero element to the pivot position.
\\\[\\left(\\begin{array}{rrr\|r}0\&1\&\-1\&3\\\\\-2\&4\&\-1\&1\\\\\-2\&5\&\-4\&\-2\\end{array}\\right) \\xrightarrow{R1\\leftrightarrow R2}
\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\-2\&5\&\-4\&\-2\\end{array}\\right)\\]
Now that we have a non\-zero pivot, we will want to do two things:
1. Use the pivot row to eliminate (zero\-out) the entries below the pivot.
2. Multiply the pivot row by a constant so that the pivot itself equals 1\.
It does not matter what order we perform these two tasks in. Here, we will have an easy time eliminating using the \-2 pivot:
\\\[\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\-2\&5\&\-4\&\-2\\end{array}\\right)\\xrightarrow{R3'\=R3\-R1} \\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\\red{0}\&\\red{1}\&\\red{\-3}\&\\red{\-3}\\end{array}\\right)\\]
Now, as promised, we will make our pivot equal to 1\.
\\\[\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right) \\xrightarrow{R1'\=\-\\frac{1}{2} R1} \\left(\\begin{array}{rrr\|r}\\red{\\fbox{1}}\&\\red{\-2}\&\\red{\\frac{1}{2}}\&\\red{\-\\frac{1}{2}}\\\\0\&1\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right)\\]
We have finished our work with this pivot, and now we move on to the next one. Since it is already equal to 1, the only thing left to do is use it to eliminate the entries below it:
\\\[\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&\\fbox{1}\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right)\\xrightarrow{R3'\=R3\-R2} \\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&\\fbox{1}\&\-1\&3\\\\\\red{0}\&\\red{0}\&\\red{\-2}\&\\red{\-6}\\end{array}\\right)\\]
And then we move onto our last pivot. This pivot has no entries below it to eliminate, so all we must do is turn it into a 1:
\\\[\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\\frac{\-1}{2}\\\\0\&1\&\-1\&3\\\\0\&0\&\\fbox{\-2}\&\-6\\end{array}\\right)\\xrightarrow{R3'\=\-\\frac{1}{2}R3}\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&\-1\&3\\\\\\red{0}\&\\red{0}\&\\red{\\fbox{1}}\&\\red{3}\\end{array}\\right) \\]
Now, what really differentiates Gauss\-Jordan elimination from Gaussian elimination is the next few steps. Here, our goal will be to use the pivots to eliminate all of the entries *above* them. While this takes a little extra work, as we will see, it helps us avoid the tedious work of back substitution.
We’ll start at the southeast corner on the current pivot. We will use that pivot to eliminate the elements above it:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&\-1\&3\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right) \\xrightarrow{R2'\=R2\+R3} \\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\\\red{0}\&\\red{1}\&\\red{0}\&\\red{6}\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\]
\\\[ \\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&0\&6\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\xrightarrow{R1'\=R1\-\\frac{1}{2}R3}\\left(\\begin{array}{rrr\|r} \\red{1}\&\\red{\-2}\&\\red{0}\&\\red{\-2}\\\\0\&1\&0\&6\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\]
We’re almost done! One more pivot with elements above it to be eliminated:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-2\&0\&\-2\\\\0\&\\fbox{1}\&0\&6\\\\0\&0\&1\&3\\end{array}\\right) \\xrightarrow{R1'\=R1\+2R2}
\\left(\\begin{array}{rrr\|r}\\red{1}\&\\red{0}\&\\red{0}\&\\red{10}\\\\0\&\\fbox{1}\&0\&6\\\\0\&0\&1\&3\\end{array}\\right)\\]
And we’ve reached **reduced row echelon form**. How does this help us? Well, let’s transform back to a system of equations:
\\\[\\begin{cases}\\begin{align}
x\_1 \= 10\\\\
x\_2\= 6\\\\
x\_3 \=3\\end{align}\\end{cases}\\]
The solution is simply what’s left in the right hand column of the augmented matrix.
As you can see, the steps to performing Gaussian elimination and Gauss\-Jordan elimination are very similar. Gauss\-Jordan elimination is merely an extension of Gaussian elimination which brings the problem as close to completion as possible.
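As a quick numerical check of Example 5\.3 (again a base R sketch, not part of the elimination itself), we can confirm that the vector read off the right\-hand column really satisfies the original system:

```
A <- matrix(c( 0, 1, -1,
              -2, 4, -1,
              -2, 5, -4), nrow = 3, byrow = TRUE)
b <- c(3, 1, -2)
x <- c(10, 6, 3)   # solution read from the reduced row echelon form
A %*% x            # reproduces b, confirming the solution
```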
### 5\.2\.1 Gauss\-Jordan Elimination Summary
1. Focusing on the first column, identify the first pivot element. The first pivot element should be located in the first row (if this entry is zero, we must interchange rows so that it is non\-zero). Our goal will be to use this element to eliminate all of the elements below it.
2. The pivot element should be equal to 1\. If it is not, we simply multiply the row by a constant to make it equal 1 (or interchange rows, if possible).
3. Eliminate (zero\-out) all elements below the pivot using the combination row operation.
4. Determine the next pivot and go back to step 2\.
* Only nonzero numbers are allowed to be pivots. If a coefficient in a pivot position is ever 0, then the rows of the matrix should be interchanged to find a nonzero pivot. If this is not possible then we continue on to the next possible column where a pivot position can be created.
5. When the last pivot is equal to 1, begin to eliminate all the entries above the pivot positions.
6. When all entries above and below each pivot element are equal to zero, the augmented matrix is said to be in *reduced row echelon form* and the Gauss\-Jordan elimination process is complete.
**Exercise 5\.2 (Gauss\-Jordan Elimination)** Use the Gauss\-Jordan method to solve the following system:
\\\[\\begin{cases}\\begin{align}
4x\_2\-3x\_3\=3\\\\
\-x\_1\+7x\_2\-5x\_3\=4\\\\
\-x\_1\+8x\_2\-6x\_3\=5\\end{align}\\end{cases}\\]
5\.3 Three Types of Systems
---------------------------
As was mentioned earlier, there are 3 situations that may arise when solving a system of equations:
* The system could have one **unique solution** (this is the situation of our examples thus far).
* The system could have no solutions (sometimes called *overdetermined* or ***inconsistent***).
* The system could have **infinitely many solutions** (sometimes called *underdetermined*).
### 5\.3\.1 The Unique Solution Case
Based on our earlier examples, we already have a sense for systems which fall into the first case.
**Theorem 5\.1 (Case 1: Unique solution)** A system of equations \\(\\A\\x\=\\b\\) has a unique solution if and only if *both* of the following conditions hold:
1. The number of equations is equal to the number of variables (i.e. the coefficient matrix \\(\\A\\) is *square*).
2. The number of pivots is equal to the number of rows/columns. In other words, under Gauss\-Jordan elimination, the coefficient matrix is transformed into the identity matrix:
\\\[\\A \\xrightarrow{Gauss\-Jordan} I\\]
In this case, we say that the matrix \\(\\A\\) is **invertible** because it is full\-rank (the rank of a matrix is the number of pivots after Gauss\-Jordan elimination) *and* square.
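One informal way to check these conditions numerically (an aside, using base R rather than anything introduced in the text) is the `qr()` decomposition, which reports a numerical rank:

```
A <- matrix(c(1,  1,  1,
              1, -2,  2,
              1,  2, -1), nrow = 3, byrow = TRUE)  # square coefficient matrix from Example 5.2
qr(A)$rank   # 3 = number of rows/columns, so the system has a unique solution
```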
### 5\.3\.2 The Inconsistent Case
The second case scenario is a very specific one. In order for a system of equations to be **inconsistent** and have no solutions, it must be that after Gaussian elimination, a situation occurs where at least one equation reduces to \\(0\=\\alpha\\) where \\(\\alpha\\) is nonzero. Such a situation would look as follows (using asterisks to denote any nonzero numbers):
\\\[\\left(\\begin{array}{rrr\|r} \*\&\*\&\*\&\*\\\\0\&\*\&\*\&\*\\\\0\&0\&0\&\\alpha\\end{array}\\right) \\]
The third row of this augmented system indicates that \\\[0x\_1\+0x\_2\+0x\_3\=\\alpha\\] where \\(\\alpha\\neq 0\\), which is a contradiction. When we reach such a situation through Gauss\-Jordan elimination, we know the system is inconsistent.
**Example 5\.4 (Identifying an Inconsistent System)** \\\[\\begin{cases}\\begin{align}
x\-y\+z\=1\\\\
x\-y\-z\=2\\\\
x\+y\-z\=3\\\\
x\+y\+z\=4\\end{align}\\end{cases}\\]
Using the augmented matrix and Gaussian elimination, we take the following steps:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\1\&\-1\&\-1\&2\\\\1\&1\&\-1\&3\\\\1\&1\&1\&4\\end{array}\\right) \\xrightarrow{\\substack{R2'\=R2\-R1 \\\\ R3'\=R3\-R1 \\\\ R4'\=R4\-R1}} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&0\&\-2\&1\\\\0\&2\&\-2\&2\\\\0\&2\&0\&3\\end{array}\\right) \\]
\\\[\\xrightarrow{ R4\\leftrightarrow R2}\\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&2\&\-2\&2\\\\0\&0\&\-2\&1\\end{array}\\right)\\xrightarrow{R3'\=R3\-R2} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&0\&\-2\&\-1\\\\0\&0\&\-2\&1\\end{array}\\right)\\]
\\\[\\xrightarrow{R4'\=R4\-R3} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&0\&\-2\&\-1\\\\0\&0\&0\&2\\end{array}\\right)\\]
In this final step, we see our contradiction equation, \\(0\=2\\). Since this is obviously impossible, we conclude that the system is inconsistent.
Sometimes inconsistent systems are referred to as *over\-determined*. In this example, you can see that we had more equations than variables. This is a common characteristic of over\-determined or inconsistent systems. You can think of it as placing too many demands on a small set of variables! In fact, this is precisely the situation in which we find ourselves when we approach linear regression. Regression systems do not have an exact solution: there is generally no set of \\(\\beta\_i's\\) that we can find so that our regression equation exactly fits every observation in the dataset \- the regression system is inconsistent. Thus, we need a way to get *as close as possible* to a solution; that is, we need to find a solution that minimizes the residual error. This is done using the Least Squares method, the subject of Chapter [10](leastsquares.html#leastsquares).
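A related numerical check (an aside, not part of the text's procedure): a system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank. For Example 5\.4 the ranks differ, which is another way to detect the contradiction:

```
A <- matrix(c(1, -1,  1,
              1, -1, -1,
              1,  1, -1,
              1,  1,  1), nrow = 4, byrow = TRUE)
b <- c(1, 2, 3, 4)
qr(A)$rank            # rank of the coefficient matrix: 3
qr(cbind(A, b))$rank  # rank of the augmented matrix: 4, so the system is inconsistent
```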
### 5\.3\.3 The Infinite Solutions Case
For the third case, consider the following system of equations written as an augmented matrix, and its reduced row echelon form after Gauss\-Jordan elimination. As an exercise, it is suggested that you confirm this result.
\\\[\\left(\\begin{array}{rrr\|r} 1\&2\&3\&0\\\\2\&1\&3\&0\\\\1\&1\&2\&0\\end{array}\\right) \\xrightarrow{Gauss\-Jordan} \\left(\\begin{array}{rrr\|r} 1\&0\&1\&0\\\\0\&1\&1\&0\\\\0\&0\&0\&0\\end{array}\\right) \\]
There are several things you should notice about this reduced row echelon form. For starters, it has a row that is completely 0\. This means, intuitively, that one of the equations was able to be completely eliminated \- it contained redundant information from the first two. The second thing you might notice is that there are only 2 pivot elements. Because there is no pivot in the third row, the last entries in the third column could not be eliminated! This is characteristic of what is called a **free\-variable**. Let’s see what this means by translating our reduced system back to equations:
\\\[\\begin{cases}\\begin{align}
x\_1\+x\_3 \= 0\\\\
x\_2\+x\_3\= 0\\end{align}\\end{cases}\\]
Clearly, our answer to this problem depends on the variable \\(x\_3\\), which is considered *free* to take on any value. Once we know the value of \\(x\_3\\) we can easily determine that
\\\[\\begin{align}
x\_1 \&\= \-x\_3 \\\\
x\_2 \&\= \-x\_3 \\end{align}\\]
Our convention here is to **parameterize** the solution and simply declare that \\(x\_3\=s\\) (or any other placeholder variable for a constant). Then our solution becomes:
\\\[\\pm x\_1\\\\x\_2\\\\x\_3 \\mp \= \\pm \-s \\\\ \-s \\\\ s \\mp \= s \\pm \-1\\\\\-1\\\\1 \\mp\\]
What this means is that any scalar multiple of the vector \\(\\pm \-1\\\\\-1\\\\1 \\mp\\) is a solution to the system. Thus there are infinitely many solutions!
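We can spot\-check this claim in R (a verification sketch only): any scalar multiple of that vector is sent to the zero vector by the coefficient matrix.

```
A <- matrix(c(1, 2, 3,
              2, 1, 3,
              1, 1, 2), nrow = 3, byrow = TRUE)
s <- 7                     # any value of s works here
A %*% (s * c(-1, -1, 1))   # returns the zero vector
```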
**Theorem 5\.2 (Case 3: Infinitely Many Solutions)** A system of equations \\(\\A\\x\=\\b\\) has infinitely many solutions if the system is consistent and *any* of the following conditions hold:
1. The number of variables is greater than the number of equations.
2. There is at least one *free variable* presented in the reduced row echelon form.
3. The number of pivots is less than the number of variables.
**Example 5\.5 (Infinitely Many Solutions)** For the following reduced system of equations, characterize the set of solutions in the same fashion as the previous example.
\\\[\\left(\\begin{array}{rrrr\|r}
1\&0\&1\&2\&0\\\\0\&1\&1\&\-1\&0\\\\0\&0\&0\&0\&0\\\\0\&0\&0\&0\&0\\end{array}\\right) \\]
A good way to start is sometimes to write out the corresponding equations:
\\\[\\begin{cases}\\begin{align}
x\_1\+x\_3\+2x\_4 \= 0\\\\
x\_2\+x\_3\-x\_4\= 0\\end{align}\\end{cases} \\Longrightarrow \\begin{cases}\\begin{align}
x\_1\=\-x\_3\-2x\_4\\\\
x\_2\=\-x\_3\+x\_4\\end{align}\\end{cases}\\]
Now we have *two* variables which are free to take on any value. Thus, let
\\\[x\_3 \= s \\quad \\mbox{and} \\quad x\_4 \= t\\]
Then, our solution is:
\\\[\\pm x\_1\\\\x\_2\\\\x\_3\\\\x\_4 \\mp \= \\pm \-s\-2t \\\\ \-s\+t\\\\s\\\\t \\mp \= s\\pm \-1\\\\\-1\\\\1\\\\0 \\mp \+ t\\pm \-2\\\\1\\\\0\\\\1 \\mp\\]
so any linear combination of the vectors
\\\[\\pm \-1\\\\\-1\\\\1\\\\0 \\mp \\quad \\mbox{and} \\quad \\pm \-2\\\\1\\\\0\\\\1 \\mp\\]
will provide a solution to this system.
### 5\.3\.4 Matrix Rank
The **rank** of a matrix is the number of linearly independent rows or columns in the matrix (the number of linearly independent rows will always be the same as the number of linearly independent columns). It can be determined by reducing a matrix to row\-echelon form and counting the number of pivots. A matrix is said to be **full rank** when its rank is maximal, meaning that either all rows or all columns are linearly independent. In other words, an \\(m\\times n\\) matrix \\(\\A\\) is full rank when the rank(\\(\\A\\))\\(\=\\min(m,n)\\). A square matrix that is full rank will always have an inverse.
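If you would rather not count pivots by hand, base R's `qr()` gives a numerical rank (again an aside; the text's definition is in terms of pivots):

```
A <- matrix(c(1, 2, 3,
              2, 1, 3,
              1, 1, 2), nrow = 3, byrow = TRUE)  # matrix from Section 5.3.3
qr(A)$rank   # 2 < 3, so A is not full rank and has no inverse
```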
5\.4 Solving Matrix Equations
-----------------------------
One final piece to the puzzle is what happens when we have a matrix equation like
\\\[\\A\\X\=\\B\\]
This situation is an easy extension of our previous problem because we are essentially solving the same system of equations with several different right\-hand\-side vectors (the columns of \\(\\B\\)).
Let’s look at a \\(2\\times 2\\) example to get a feel for this! We’ll dissect the following matrix equation into two different systems of equations:
\\\[\\pm 1\&1\\\\2\&1\\mp \\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 3\&3\\\\4\&5 \\mp.\\]
Based on our previous discussions, we ought to be able to see that this matrix equation represents 4 separate equations which we’ll combine into two systems:
\\\[\\pm 1\&1\\\\2\&1\\mp \\pm x\_{11} \\\\x\_{21} \\mp \= \\pm 3\\\\4 \\mp \\quad \\mbox{and}\\quad \\pm 1\&1\\\\2\&1\\mp \\pm x\_{12} \\\\x\_{22} \\mp \= \\pm 3\\\\5 \\mp\\]
Once you convince yourself that the unknowns can be found in this way, let’s take a look at the augmented matrices for these two systems:
\\\[\\left(\\begin{array}{rr\|r}
1\&1\&3\\\\2\&1\&4\\end{array}\\right) \\quad\\mbox{and}\\quad \\left(\\begin{array}{rr\|r}
1\&1\&3\\\\2\&1\&5\\end{array}\\right)\\]
When performing Gauss\-Jordan elimination on these two augmented matrices, how are the row operations going to differ? They’re not! The same row operations will be used for each augmented matrix \- the only thing that will differ is how these row operations will affect the right hand side vectors. Thus, it is possible for us to keep track of those differences in one larger augmented matrix:
\\\[\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
2\&1\&4\&5
\\end{array}
\\end{pmatrix}\\]
We can then perform the row operations on both right\-hand sides at once:
\\\[\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
2\&1\&4\&5
\\end{array}
\\end{pmatrix}\\xrightarrow{R2'\=R2\-2R1}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
0\&\-1\&\-2\&\-1
\\end{array}
\\end{pmatrix} \\]
\\\[\\xrightarrow{R2'\=\-1R2}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
0\&1\&2\&1
\\end{array}
\\end{pmatrix}\\xrightarrow{R1'\=R1\-R2}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&0\&1\&2\\\\
0\&1\&2\&1
\\end{array}
\\end{pmatrix}\\]
Now again, remembering the situation from which we came, we have the equivalent system:
\\\[\\pm 1\&0\\\\0\&1 \\mp \\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 1\&2\\\\2\&1\\mp\\]
So we can conclude that \\\[\\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 1\&2\\\\2\&1\\mp\\] and we have solved our system. This method is particularly useful when finding the inverse of a matrix.
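In R, base `solve()` accepts a matrix right\-hand side, so the whole matrix equation can be solved in one call (a sketch; the text's point is the shared row operations, not this shortcut):

```
A <- matrix(c(1, 1,
              2, 1), nrow = 2, byrow = TRUE)
B <- matrix(c(3, 3,
              4, 5), nrow = 2, byrow = TRUE)
solve(A, B)   # returns X; both columns are solved with one set of row operations
```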
### 5\.4\.1 Solving for the Inverse of a Matrix
For any square matrix \\(\\A\\), we know the inverse matrix (\\(\\A^{\-1}\\)), if it exists, satisfies the following matrix equation,
\\\[\\A\\A^{\-1} \= \\I.\\]
Using the Gauss\-Jordan method with multiple right hand sides, we can solve for the inverse of any matrix. We simply start with an augmented matrix with \\(\\A\\) on the left and the identity on the right, and then use Gauss\-Jordan elimination to transform the matrix \\(\\A\\) into the identity matrix.
\\\[\\left(\\begin{array}{r\|r}
\\bo{A} \& \\I\\end{array}\\right)\\xrightarrow{Gauss\-Jordan}\\left(\\begin{array}{r\|r} \\bo{I} \& \\A^{\-1}\\end{array}\\right)\\]
If this is possible then the matrix on the right is the inverse of \\(\\A\\). If this is not possible then \\(\\A\\) does not have an inverse. Let’s see a quick example of this.
**Example 5\.6 (Finding a Matrix Inverse)** Find the inverse of \\\[\\A \= \\pm \-1\&2\&\-1\\\\0\&\-1\&1\\\\2\&\-1\&0 \\mp\\] using Gauss\-Jordan Elimination.
Since \\(\\A\\A^{\-1} \= \\I\\), we set up the augmented matrix as \\(\\left(\\begin{array}{r\|r} \\bo{A} \& \\I\\end{array}\\right)\\):
\\\[\\begin{pmatrix}
\\begin{array}{ccc\|ccc}\-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\2\&\-1\&0\&0\&0\&1 \\end{array}\\end{pmatrix} \\xrightarrow{R3'\=R3\+2R1}
\\begin{pmatrix}
\\begin{array}{ccc\|ccc} \-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&3\&\-2\&2\&0\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}
\\begin{array}{ccc\|ccc} \-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&3\&\-2\&2\&0\&1 \\end{array}\\end{pmatrix}
\\xrightarrow{\\substack{R1'\=\-1R1\\\\R3'\=R3\+3R2}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&1\&\-1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&1\&\-1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\xrightarrow{\\substack{R1'\=R1\-R3\\\\R2'\=R2\-R3}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&0\&\-3\&\-3\&\-1\\\\0\&\-1\&0\&\-2\&\-2\&\-1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&0\&\-3\&\-3\&\-1\\\\0\&\-1\&0\&\-2\&\-2\&\-1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\xrightarrow{\\substack{R2'\=\-1R2\\\\R1'\=R1\+2R2}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&0\&0\&1\&1\&1\\\\0\&1\&0\&2\&2\&1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
Finally, we have completed our task. The inverse of \\(\\A\\) is the matrix on the right hand side of the augmented matrix!
\\\[\\A^{\-1} \= \\pm 1\&1\&1\\\\2\&2\&1\\\\2\&3\&1 \\mp\\]
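The augmented \\(\\left(\\begin{array}{r\|r} \\bo{A} \& \\I\\end{array}\\right)\\) approach can be imitated in R with the `rref()` function from the `pracma` package (introduced in Section 5\.5); base R's `solve()` returns the same inverse directly. This is only a check on the hand computation, assuming `pracma` is installed.

```
library(pracma)                  # for rref()
A <- matrix(c(-1,  2, -1,
               0, -1,  1,
               2, -1,  0), nrow = 3, byrow = TRUE)
rref(cbind(A, diag(3)))[, 4:6]   # right half of rref([A | I]) is the inverse of A
solve(A)                         # base R agrees
```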
**Exercise 5\.3 (Finding a Matrix Inverse)** Use the same method to determine the inverse of
\\\[\\B\=\\pm 1\&1\&1\\\\2\&2\&1\\\\2\&3\&1 \\mp\\]
(*hint: Example [5\.6](solvesys.html#exm:findinverse) should tell you the answer you expect to find!*)
**Example 5\.7 (Inverse of a Diagonal Matrix)** A full rank diagonal matrix (one with no zero diagonal elements) has a particularly neat and tidy inverse. Here we motivate the definition by working through an example. Find the inverse of the diagonal matrix \\(\\D\\),
\\\[\\D \= \\pm 3\&0\&0\\\\0\&\-2\&0\\\\0\&0\&\\sqrt{5} \\mp \\]
To begin the process, we start with an augmented matrix and proceed with Gauss\-Jordan Elimination. In this case, the process is quite simple! The elements above and below the diagonal pivots are already zero, so we simply need to make each pivot equal to 1!
\\\[\\pm\\begin{array}{ccc\|ccc} 3\&0\&0\&1\&0\&0\\\\0\&\-2\&0\&0\&1\&0\\\\0\&0\&\\sqrt{5}\&0\&0\&1 \\end{array}\\mp
\\xrightarrow{\\substack{R1'\=\\frac{1}{3}R1 \\\\R2' \= \-\\frac{1}{2} R2\\\\R3'\=\\frac{1}{\\sqrt{5}} R3}}
\\pm\\begin{array}{ccc\|ccc} 1\&0\&0\&\\frac{1}{3}\&0\&0\\\\0\&1\&0\&0\&\-\\frac{1}{2}\&0\\\\0\&0\&1\&0\&0\&\\frac{1}{\\sqrt{5}} \\end{array}\\mp\\]
Thus, the inverse of \\(\\D\\) is:
\\\[\\D^{\-1} \= \\pm \\frac{1}{3}\&0\&0\\\\0\&\-\\frac{1}{2}\&0\\\\0\&0\&\\frac{1}{\\sqrt{5}} \\mp \\]
As you can see, all we had to do is take the scalar inverse of each diagonal element!
**Definition 5\.3 (Inverse of a Diagonal Matrix)** An \\(n\\times n\\) diagonal matrix \\(\\D \= diag\\{d\_{11},d\_{22},\\dots,d\_{nn}\\}\\) with no zero diagonal elements is invertible with inverse
\\\[\\D^{\-1} \= diag\\{\\frac{1}{d\_{11}},\\frac{1}{d\_{22}},\\dots,\\frac{1}{d\_{nn}}\\}\\]
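A two\-line check in R (verification only): inverting the diagonal matrix from Example 5\.7 agrees with simply taking the reciprocal of each diagonal entry.

```
d <- c(3, -2, sqrt(5))
solve(diag(d))   # numerical inverse of D
diag(1 / d)      # identical: reciprocals on the diagonal
```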
5\.5 Gauss\-Jordan Elimination in R
-----------------------------------
It is important that you understand what is happening in the process of Gauss\-Jordan Elimination. Once you have a handle on how the procedure works, it is no longer necessary to do every calculation by hand. We can skip to the reduced row echelon form of a matrix using the `pracma` package in R.
We’ll start by creating our matrix as a variable in R. Matrices are entered as one vector, which R then breaks apart into rows and columns in the way that you specify (with `nrow`/`ncol`). The default way that R reads a vector into a matrix is down the columns; to read the data in across the rows, use the `byrow=TRUE` option. Once a matrix is created, it is stored under the variable name you give it (below, we call our matrices \\(\\Y\\) and \\(\\X\\)). We can then print out the stored matrix by simply typing \\(\\Y\\) or \\(\\X\\) at the prompt:
```
(Y=matrix(c(1,2,3,4),nrow=2,ncol=2))
```
```
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
```
```
(X=matrix(c(1,2,3,4),nrow=2,ncol=2,byrow=TRUE))
```
```
## [,1] [,2]
## [1,] 1 2
## [2,] 3 4
```
To perform Gauss\-Jordan elimination, we need to install the `pracma` package which contains the code for this procedure.
```
install.packages("pracma")
```
After installing a package in R, you must load it in each session before you can use its functions. This is done with the `library()` command:
```
library("pracma")
```
Now that the package is loaded, we can use the `rref()` function to get the reduced row echelon form of an augmented matrix, \\(\\A\\):
```
A= matrix(c(1,1,1,1,-1,-1,1,1,1,-1,-1,1,1,2,3,4), nrow=4, ncol=4)
A
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1 -1 1 1
## [2,] 1 -1 -1 2
## [3,] 1 1 -1 3
## [4,] 1 1 1 4
```
```
rref(A)
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1 0 0 0
## [2,] 0 1 0 0
## [3,] 0 0 1 0
## [4,] 0 0 0 1
```
And we have the reduced row echelon form for one of the problems from the worksheets! You can see this system of equations is inconsistent because the bottom row amounts to the equation
\\\[0x\_1\+0x\_2\+0x\_3 \= 1\.\\]
This should save you some time and energy by skipping the arithmetic steps in Gauss\-Jordan Elimination.
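The same function will also reproduce the worked examples from earlier in the chapter. For instance (assuming `pracma` is still loaded), applying `rref()` to the augmented matrix of Example 5\.2 returns its unique solution in the final column:

```
aug <- matrix(c(1,  1,  1, 1,
                1, -2,  2, 4,
                1,  2, -1, 2), nrow = 3, byrow = TRUE)
rref(aug)   # final column gives x1 = 3.6, x2 = -1.4, x3 = -1.2
```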
5\.6 Exercises
--------------
1. For the following two systems of equations, draw both equations on the same plane. Comment on what you find and what it means about the system of equations.
    a. \\\[\\begin{eqnarray\*} x\_1 \+ x\_2 \&\=\& 10 \\\\ \-x\_1 \+ x\_2 \&\=\& 0 \\end{eqnarray\*}\\]
    b. \\\[\\begin{eqnarray\*} x\_1 \- 2x\_2 \&\=\& \-3 \\\\ 2x\_1 \- 4x\_2 \&\=\& 8 \\end{eqnarray\*}\\]
2. We’ve drawn 2 of the 3 possible outcomes for systems of equations. Give an example of the 3rd possible outcome and draw the accompanying picture.
3. Specify whether the following augmented matrices are in row\-echelon form (REF), reduced row\-echelon form (RREF), or neither:
    a. \\(\\left(\\begin{array}{ccc\|c} 3\&2\&1\&2\\\\0\&2\&0\&1\\\\0\&0\&1\&5 \\end{array}\\right)\\)
    b. \\(\\left(\\begin{array}{ccc\|c} 3\&2\&1\&2\\\\0\&2\&0\&1\\\\0\&4\&0\&0 \\end{array}\\right)\\)
    c. \\(\\left(\\begin{array}{ccc\|c} 1\&1\&0\&2\\\\0\&0\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
    d. \\(\\left(\\begin{array}{ccc\|c} 1\&2\&0\&2\\\\0\&1\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
4. Using Gaussian Elimination on the augmented matrices, reduce each system of equations to a triangular form and solve using back\-substitution.
    a. \\\[\\begin{cases} x\_1 \+2x\_2\= 3\\\\ \-x\_1\+x\_2\=0\\end{cases}\\]
    b. \\\[\\begin{cases} x\_1\+x\_2 \+2x\_3\= 7\\\\ x\_1\+x\_3 \= 4\\\\ \-2x\_1\-2x\_2 \=\-6\\end{cases}\\]
    c. \\\[\\begin{cases}\\begin{align} 2x\_1\-x\_2 \+x\_3\= 1\\\\ \-x\_1\+2x\_2\+3x\_3 \= 6\\\\ x\_2\+4x\_3 \=6 \\end{align}\\end{cases}\\]
5. Using Gauss\-Jordan Elimination on the augmented matrices, reduce each system of equations from the previous exercise to reduced row\-echelon form and give the solution as a vector.
6. Use either Gaussian or Gauss\-Jordan Elimination to solve the following systems of equations. Indicate whether the systems have a unique solution, no solution, or infinitely many solutions. If the system has infinitely many solutions, exhibit a general solution in vector form as we did in Section [5\.3\.3](solvesys.html#infinitesol).
    a. \\\[\\begin{cases}\\begin{align} 2x\_1\+2x\_2\+6x\_3\=4\\\\ 2x\_1\+x\_2\+7x\_3\=6\\\\ \-2x\_1\-6x\_2\-7x\_3\=\-1\\end{align}\\end{cases}\\]
    b. \\\[\\begin{cases}\\begin{align} 1x\_1\+2x\_2\+2x\_3\=0\\\\ 2x\_1\+5x\_2\+7x\_3\=0\\\\ 3x\_1\+6x\_2\+6x\_3\=0\\end{align}\\end{cases}\\]
    c. \\\[\\begin{cases}\\begin{align} 1x\_1\+3x\_2\-5x\_3\=0\\\\ 1x\_1\-2x\_2\+4x\_3\=2\\\\ 2x\_1\+1x\_2\-1x\_3\=0\\end{align}\\end{cases}\\]
7. For the following augmented matrices, circle the pivot elements and give the rank of the coefficient matrix along with the number of free variables.
    a. \\(\\left(\\begin{array}{cccc\|c} 3\&2\&1\&1\&2\\\\0\&2\&0\&0\&1\\\\0\&0\&1\&0\&5 \\end{array}\\right)\\)
    b. \\(\\left(\\begin{array}{ccc\|c} 1\&1\&0\&2\\\\0\&0\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
    c. \\(\\left(\\begin{array}{ccccc\|c} 1\&2\&0\&1\&0\&2\\\\0\&1\&1\&1\&0\&1\\\\0\&0\&0\&1\&1\&2\\\\0\&0\&0\&0\&0\&0 \\end{array}\\right)\\)
8. Use Gauss\-Jordan Elimination to find the inverse of the following matrices, if possible.
    a. \\(\\A\=\\pm 2\&3\\\\2\&2\\mp\\)
    b. \\(\\B\=\\pm 1\&2\\\\2\&4\\mp\\)
    c. \\(\\C\=\\pm 1\&2\&3\\\\4\&5\&6\\\\7\&8\&9\\mp\\)
    d. \\(\\D\=\\pm 4\&0\&0\\\\0\&\-4\&0\\\\0\&0\&2 \\mp\\)
9. What is the inverse of a diagonal matrix, \\(\\bo{D}\=diag\\{\\sigma\_{1},\\sigma\_{2}, \\dots,\\sigma\_{n}\\}\\)?
10. Suppose you have a matrix of data, \\(\\A\_{n\\times p}\\), containing \\(n\\) observations on \\(p\\) variables. Suppose the standard deviations of these variables are contained in a diagonal matrix
\\\[\\bo{S}\= diag\\{\\sigma\_1, \\sigma\_2,\\dots,\\sigma\_p\\}.\\] Give a formula for a matrix that contains the same data but with each variable divided by its standard deviation. *Hint: This problem connects Text Exercise [2\.5](mult.html#exr:diagmultexer) and Example [5\.7](solvesys.html#exm:diaginverse)*.
5\.7 List of Key Terms
----------------------
* systems of equations
* row operations
* row\-echelon form
* pivot element
* Gaussian elimination
* Gauss\-Jordan elimination
* reduced row\-echelon form
* rank
* unique solution
* infinitely many solutions
* inconsistent
* back\-substitution
**Example 5\.2 (Row Operations on the Augmented Matrix)** We will solve the system of equations from Example [5\.1](solvesys.html#exm:rowopeq) using the Augmented Matrix.
\\\[\\begin{equation\*}\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_1\-2x\_2\+2x\_3 \=4\\\\
x\_1\+2x\_2\-x\_3 \= 2\\end{align}\\end{cases}
\\end{equation\*}\\]
Our first step will be to write the augmented matrix and identify the current pivot. Here, a square is drawn around the pivot and the numbers below the pivot are circled. It is our goal to eliminate the circled numbers using the row with the pivot.
\\\[\\begin{equation\*}
\\left(\\begin{array}{rrr\|r}
1 \& 1 \& 1 \& 1\\\\
1 \& \-2 \& 2 \&4\\\\
1\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{Current Pivot}\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}} \& \-2 \& 2 \&4\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\end{equation\*}\\]
We can eliminate the circled elements by making combinations with those rows and the pivot row. For instance, we’d replace row 2 by the combination (row 2 \- row 1\). Our shorthand notation for this will be R2’ \= R2\-R1\. Similarly we will replace row 3 in the next step.
\\\[\\begin{equation\*}
\\xrightarrow{R2'\=R2\-R1}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\red{0} \& \\red{\-3} \& \\red{1} \&\\red{3}\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{R3'\=R3\-R1} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \-3 \& 1 \&3\\\\
\\red{0}\&\\red{1}\&\\red{\-2}\&\\red{1}
\\end{array}\\right)
\\end{equation\*}\\]
Now that we have eliminated each of the circled elements below the current pivot, we will continue on to the next pivot, which is \-3\. Looking into the future, we can either do the operation \\(R3'\=R3\+\\frac{1}{3}R2\\) or we can interchange rows 2 and 3 to avoid fractions in our next calculation. To keep things neat, we will do the latter. (*note: either way you proceed will lead you to the same solution!*)
\\\[\\begin{equation\*} \\xrightarrow{Next Pivot}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \\fbox{\-3} \& 1 \&3\\\\
0\&\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&\-2\&1
\\end{array}\\right)
\\xrightarrow{R2 \\leftrightarrow R3} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
0 \& \\enclose{circle}\[mathcolor\="red"]{\\color{black}{\-3}} \& 1 \&3
\\end{array}\\right)
\\end{equation\*}\\]
Now that the current pivot is equal to 1, we can easily eliminate the circled entries below it by replacing rows with combinations using the pivot row. We finish the process once the last pivot is identified (the final pivot has no eliminations to make below it).
\\\[\\begin{equation\*}
\\xrightarrow{R3'\=R3\+3R2}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
\\red{0} \& \\red{0} \& \\red{\\fbox{\-5}} \&\\red{6}
\\end{array}\\right)
\\end{equation\*}\\]
At this point, when all the pivots have been reached, the augmented matrix is said to be in **row\-echelon form**. This simply means that all of the entries below the pivots are equal to 0\. The augmented matrix can be transformed back into equation form now that it is in a triangular form:
\\\[\\begin{equation\*}
\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_2\-2x\_3 \= 1\\\\
5x\_3 \=\-6\\end{align}\\end{cases}
\\end{equation\*}\\]
Which is the same system we finally solved in Example [5\.1](solvesys.html#exm:rowopeq) to get the final solution:
\\\[x\_1 \= 3\.6 \\quad x\_2 \= \-1\.4 \\quad x\_3 \= \-1\.2 \\]
### 5\.1\.3 Gaussian Elimination Summary
Let’s summarize the process of Gaussian elimination step\-by\-step:
1. We work from the upper\-left\-hand corner of the matrix to the lower\-right\-hand corner
- Focusing on the first column, identify the first pivot element. The first pivot element should be located in the first row (if this entry is zero, we must interchange rows so that it is non\-zero).
- Eliminate (zero\-out) all elements below the pivot using the combination row operation.
- Determine the next pivot and go back to step 2\.
* Only nonzero numbers are allowed to be pivots.
* If a coefficient in the next pivot position is 0, then the rows of the matrix should be interchanged to find a nonzero pivot.
* If this is not possible then we continue on to the next column to determine a pivot.- When the entries below all of the pivots are equal to zero, the process stops. The augmented matrix is said to be in *row\-echelon form*, which corresponds to a *triangular* system of equations, suitable to solve using back substitution.
**Exercise 5\.1 (Gaussian Elimination and Back Substitution)** Use Gaussian Elimination and back substitution to solve the following system.
\\\[\\begin{cases}\\begin{align}
2x\_1\-x\_2\=1\\\\
\-x\_1\+2x\_2\-x\_3\=0\\\\
\-x\_2\+x\_3\=0\\end{align}\\end{cases}\\]
### 5\.1\.1 Row Operations
In the previous example, we demonstrated one operation that can be performed on systems of equations without changing the solution: one equation can be added to a multiple of another (in that example, the multiple was \-1\). For any system of equations, there are 3 operations which will not change the solution set:
1. Interchanging the order of the equations.
2. Multiplying both sides of one equation by a constant.
3. Replace one equation by a linear combination of itself and of another equation.
Taking our simple system from the previous example, we’ll examine these three operations concretely:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+2x\_2 \&\=\& 11\\\\
x\_1\+x\_2 \&\=\& 6\\end{eqnarray}\\end{cases}\\]
1. Interchanging the order of the equations.
| \\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\= 11\\\\ x\_1\+x\_2 \&\= 6\\end{align}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{align} x\_1\+x\_2 \=\& 6\\\\ x\_1\+2x\_2 \=\& 11\\end{align}\\end{cases}\\] | | |
| --- | --- | --- |
2. Multiplying both sides of one equation by a constant. *(Multiply the second equation by \-1\)*.
| \\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\=\& 11\\\\ x\_1\+x\_2 \&\=\& 6\\end{align}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{align} x\_1\+2x\_2 \&\=\& 11\\\\ \-1x\_1\-1x\_2 \&\=\& \-6\\end{align}\\end{cases}\\] | | |
| --- | --- | --- |
3. Replace one equation by a linear combination of itself and of another equation. *(Replace the second equation by the first minus the second.)*
| \\\[\\begin{cases}\\begin{eqnarray} x\_1\+2x\_2 \&\=\& 11\\\\ x\_1\+x\_2 \&\=\& 6\\end{eqnarray}\\end{cases}\\] \\(\\Leftrightarrow\\) \\\[\\begin{cases}\\begin{eqnarray} x\_1\+2x\_2 \&\=\& 11\\\\ x\_2 \&\=\& 5\\end{eqnarray}\\end{cases}\\] | | |
| --- | --- | --- |
Using these 3 row operations, we can transform any system of equations into one that is *triangular*. A **triangular system** is one that can be solved by back substitution. For example,
\\\[\\begin{cases}\\begin{align}
x\_1\+2x\_2 \+3x\_3\= 14\\\\
x\_2\+x\_3 \=6\\\\
x\_3 \= 1\\end{align}\\end{cases}\\]
is a triangular system. Using substitution, the second equation will give us the value for \\(x\_2\\), which will allow for further substitution into the first equation to solve for the value of \\(x\_1\\). Let’s take a look at an example of how we can transform any system to a triangular system.
**Example 5\.1 (Transforming a System to a Triangular System via 3 Operations)** Solve the following system of equations:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3\&\=\& 1\\\\
x\_1\-2x\_2\+2x\_3 \&\=\&4\\\\
x\_1\+2x\_2\-x\_3 \&\=\& 2\\end{eqnarray}\\end{cases}\\]
To turn this into a triangular system, we will want to eliminate the variable \\(x\_1\\) from two of the equations. We can do this by taking the following operations:
1. Replace equation 2 with (equation 2 \- equation 1\).
2. Replace equation 3 with (equation 3 \- equation 1\).
Then, our system becomes:
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3\&\=\& 1\\\\
\-3x\_2\+x\_3 \&\=\&3\\\\
x\_2\-2x\_3 \&\=\& 1\\end{eqnarray}\\end{cases}\\]
Next, we will want to eliminate the variable \\(x\_2\\) from the third equation. We can do this by replacing equation 3 with (equation 3 \+ \\(\\frac{1}{3}\\) equation 2\). *However*, we can avoid dealing with fractions if instead we:
3. Swap equations 2 and 3\.
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3 \&\=\& 1\\\\
x\_2\-2x\_3 \&\=\& 1\\\\
\-3x\_2\+x\_3 \&\=\&3\\end{eqnarray}\\end{cases}\\]
Now, as promised our math is a little simpler:
4. Replace equation 3 with (equation 3 \+ 3\*equation 2\).
\\\[\\begin{cases}\\begin{eqnarray}
x\_1\+x\_2 \+x\_3 \&\=\& 1 \\\\
x\_2\-2x\_3 \&\=\& 1 \\\\
\-5x\_3 \&\=\&6 \\end{eqnarray}\\end{cases}\\]
Now that our system is in triangular form, we can use substitution to solve for all of the variables:
\\\[x\_1 \= 3\.6 \\quad x\_2 \= \-1\.4 \\quad x\_3 \= \-1\.2 \\]
This is the procedure for Gaussian Elimination, which we will now formalize in it’s matrix version.
### 5\.1\.2 The Augmented Matrix
When solving systems of equations, we will commonly use the **augmented matrix**.
**Definition 5\.1 (The Augmented Matrix)** The **augmented matrix** of a system of equations is simply the matrix which contains all of the coefficients of the equations, augmented with an extra column holding the values on the right hand sides of the equations. If our system is:
\\\[\\begin{cases}\\begin{eqnarray}
a\_{11}x\_1\+a\_{12}x\_2 \+a\_{13}x\_3\&\=\& b\_1
a\_{21}x\_1\+a\_{22}x\_2 \+a\_{23}x\_3\&\=\& b\_2
a\_{31}x\_1\+a\_{32}x\_2 \+a\_{33}x\_3\&\=\&b\_3 \\end{eqnarray}\\end{cases}\\]
Then the corresponding augmented matrix is
\\\[\\left(\\begin{array}{rrr\|r}
a\_{11}\&a\_{12}\&a\_{13}\& b\_1\\\\
a\_{21}\&a\_{22}\&a\_{23}\& b\_2\\\\
a\_{31}\&a\_{12}\&a\_{33}\& b\_3\\\\
\\end{array}\\right)\\]
Using this augmented matrix, we can contain all of the information needed to perform the three operations outlined in the previous section. We will formalize these operations as they pertain to the rows (i.e. individual equations) of the augmented matrix (i.e. the entire system) in the following definition.
**Definition 5\.2 (Row Operations for Gaussian Elimination)** Gaussian Elimination is performed on the rows, \\(\\arow{i},\\) of an augmented matrix, \\\[\\A \= \\pm \\arow{1}\\\\\\arow{2}\\\\\\arow{3}\\\\\\vdots\\\\\\arow{m}\\mp\\] by using the three **elementary row operations**:
1. Swap rows \\(i\\) and \\(j\\).
2. Replace row \\(i\\) by a nonzero multiple of itself.
3. Replace row \\(i\\) by a linear combination of itself plus a multiple of row \\(j\\).
The ultimate goal of Gaussian elimination is to transform an augmented matrix into an **upper\-triangular matrix** which allows for backsolving.
\\\[\\A \\rightarrow \\left(\\begin{array}{rrrr\|r}
t\_{11}\& t\_{12}\& \\dots\& t\_{1n}\&c\_1\\cr
0\& t\_{22}\& \\dots\& t\_{2n}\&c\_2\\cr
\\vdots\& \\vdots\& \\ddots\& \\vdots\&\\vdots\\cr
0\& 0\& \\dots\& t\_{nn}\&c\_n\\end{array}\\right)\\]
The key to this process at each step is to focus on one position, called the *pivot position* or simply the *pivot*, and try to eliminate all terms below this position using the three row operations. Only nonzero numbers are allowed to be pivots. If a coefficient in a pivot position is ever 0, then the rows of the matrix should be interchanged to find a nonzero pivot. If this is not possible then we continue on to the next possible column where a pivot position can be created.
Let’s now go through a detailed example of Gaussian elimination using the augmented matrix. We will use the same example (and same row operations) from the previous section to demonstrate the idea.
**Example 5\.2 (Row Operations on the Augmented Matrix)** We will solve the system of equations from Example [5\.1](solvesys.html#exm:rowopeq) using the Augmented Matrix.
\\\[\\begin{equation\*}\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_1\-2x\_2\+2x\_3 \=4\\\\
x\_1\+2x\_2\-x\_3 \= 2\\end{align}\\end{cases}
\\end{equation\*}\\]
Our first step will be to write the augmented matrix and identify the current pivot. Here, a square is drawn around the pivot and the numbers below the pivot are circled. It is our goal to eliminate the circled numbers using the row with the pivot.
\\\[\\begin{equation\*}
\\left(\\begin{array}{rrr\|r}
1 \& 1 \& 1 \& 1\\\\
1 \& \-2 \& 2 \&4\\\\
1\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{Current Pivot}\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}} \& \-2 \& 2 \&4\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\end{equation\*}\\]
We can eliminate the circled elements by making combinations with those rows and the pivot row. For instance, we’d replace row 2 by the combination (row 2 \- row 1\). Our shorthand notation for this will be R2’ \= R2\-R1\. Similarly we will replace row 3 in the next step.
\\\[\\begin{equation\*}
\\xrightarrow{R2'\=R2\-R1}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
\\red{0} \& \\red{\-3} \& \\red{1} \&\\red{3}\\\\
\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&2\&\-1 \&2
\\end{array}\\right)
\\xrightarrow{R3'\=R3\-R1} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \-3 \& 1 \&3\\\\
\\red{0}\&\\red{1}\&\\red{\-2}\&\\red{1}
\\end{array}\\right)
\\end{equation\*}\\]
Now that we have eliminated each of the circled elements below the current pivot, we will continue on to the next pivot, which is \-3\. Looking into the future, we can either do the operation \\(R3'\=R3\+\\frac{1}{3}R2\\) or we can interchange rows 2 and 3 to avoid fractions in our next calculation. To keep things neat, we will do the latter. (*note: either way you proceed will lead you to the same solution!*)
\\\[\\begin{equation\*} \\xrightarrow{Next Pivot}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0 \& \\fbox{\-3} \& 1 \&3\\\\
0\&\\enclose{circle}\[mathcolor\="red"]{\\color{black}{1}}\&\-2\&1
\\end{array}\\right)
\\xrightarrow{R2 \\leftrightarrow R3} \\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
0 \& \\enclose{circle}\[mathcolor\="red"]{\\color{black}{\-3}} \& 1 \&3
\\end{array}\\right)
\\end{equation\*}\\]
Now that the current pivot is equal to 1, we can easily eliminate the circled entries below it by replacing rows with combinations using the pivot row. We finish the process once the last pivot is identified (the final pivot has no eliminations to make below it).
\\\[\\begin{equation\*}
\\xrightarrow{R3'\=R3\+3R2}
\\left(\\begin{array}{rrr\|r}
\\fbox{1} \& 1 \& 1 \& 1\\\\
0\&\\fbox{1}\&\-2\&1\\\\
\\red{0} \& \\red{0} \& \\red{\\fbox{\-5}} \&\\red{6}
\\end{array}\\right)
\\end{equation\*}\\]
At this point, when all the pivots have been reached, the augmented matrix is said to be in **row\-echelon form**. This simply means that all of the entries below the pivots are equal to 0\. The augmented matrix can be transformed back into equation form now that it is in a triangular form:
\\\[\\begin{equation\*}
\\begin{cases}\\begin{align}
x\_1\+x\_2 \+x\_3\= 1\\\\
x\_2\-2x\_3 \= 1\\\\
5x\_3 \=\-6\\end{align}\\end{cases}
\\end{equation\*}\\]
Which is the same system we finally solved in Example [5\.1](solvesys.html#exm:rowopeq) to get the final solution:
\\\[x\_1 \= 3\.6 \\quad x\_2 \= \-1\.4 \\quad x\_3 \= \-1\.2 \\]
### 5\.1\.3 Gaussian Elimination Summary
Let’s summarize the process of Gaussian elimination step\-by\-step:
1. We work from the upper\-left\-hand corner of the matrix to the lower\-right\-hand corner
- Focusing on the first column, identify the first pivot element. The first pivot element should be located in the first row (if this entry is zero, we must interchange rows so that it is non\-zero).
- Eliminate (zero\-out) all elements below the pivot using the combination row operation.
- Determine the next pivot and go back to step 2\.
* Only nonzero numbers are allowed to be pivots.
* If a coefficient in the next pivot position is 0, then the rows of the matrix should be interchanged to find a nonzero pivot.
* If this is not possible then we continue on to the next column to determine a pivot.- When the entries below all of the pivots are equal to zero, the process stops. The augmented matrix is said to be in *row\-echelon form*, which corresponds to a *triangular* system of equations, suitable to solve using back substitution.
**Exercise 5\.1 (Gaussian Elimination and Back Substitution)** Use Gaussian Elimination and back substitution to solve the following system.
\\\[\\begin{cases}\\begin{align}
2x\_1\-x\_2\=1\\\\
\-x\_1\+2x\_2\-x\_3\=0\\\\
\-x\_2\+x\_3\=0\\end{align}\\end{cases}\\]
5\.2 Gauss\-Jordan Elimination
------------------------------
Gauss\-Jordan elimination is Gaussian elimination taken one step further. In Gauss\-Jordan elimination, we do not stop when the augmented matrix is in row\-echelon form. Instead, we force all the pivot elements to equal 1 and we continue to eliminate entries *above* the pivot elements to reach what’s called **reduced row echelon form**.
Let’s take a look at another example:
**Example 5\.3 (Gauss\-Jordan elimination)** We begin with a system of equations, and transform it into an augmented matrix:
\\\[\\begin{cases}\\begin{align}
x\_2 \-x\_3\= 3\\\\
\-2x\_1\+4x\_2\-x\_3 \= 1\\\\
\-2x\_1\+5x\_2\-4x\_3 \=\-2\\end{align}\\end{cases}
\\Longrightarrow \\left(\\begin{array}{rrr\|r}0\&1\&\-1\&3\\\\\-2\&4\&\-1\&1\\\\\-2\&5\&\-4\&\-2\\end{array}\\right) \\]
We start by locating our first pivot element. This element cannot be zero, so we will have to swap rows to bring a non\-zero element to the pivot position.
\\\[\\left(\\begin{array}{rrr\|r}0\&1\&\-1\&3\\\\\-2\&4\&\-1\&1\\\\\-2\&5\&\-4\&\-2\\end{array}\\right) \\xrightarrow{R1\\leftrightarrow R2}
\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\-2\&5\&\-4\&\-2\\end{array}\\right)\\]
Now that we have a non\-zero pivot, we will want to do two things:
It does not matter what order we perform these two tasks in. Here, we will have an easy time eliminating using the \-2 pivot:
\\\[\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\-2\&5\&\-4\&\-2\\end{array}\\right)\\xrightarrow{R3'\=R3\-R1} \\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\\\red{0}\&\\red{1}\&\\red{\-3}\&\\red{\-3}\\end{array}\\right)\\]
Now, as promised, we will make our pivot equal to 1\.
\\\[\\left(\\begin{array}{rrr\|r}\\fbox{\-2}\&4\&\-1\&1\\\\0\&1\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right) \\xrightarrow{R1'\=\-\\frac{1}{2} R1} \\left(\\begin{array}{rrr\|r}\\red{\\fbox{1}}\&\\red{\-2}\&\\red{\\frac{1}{2}}\&\\red{\-\\frac{1}{2}}\\\\0\&1\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right)\\]
We have finished our work with this pivot, and now we move on to the next one. Since it is already equal to 1, the only thing left to do is use it to eliminate the entries below it:
\\\[\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&\\fbox{1}\&\-1\&3\\\\0\&1\&\-3\&\-3\\end{array}\\right)\\xrightarrow{R3'\=R3\-R2} \\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&\\fbox{1}\&\-1\&3\\\\\\red{0}\&\\red{0}\&\\red{\-2}\&\\red{\-6}\\end{array}\\right)\\]
And then we move onto our last pivot. This pivot has no entries below it to eliminate, so all we must do is turn it into a 1:
\\\[\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\\frac{\-1}{2}\\\\0\&1\&\-1\&3\\\\0\&0\&\\fbox{\-2}\&\-6\\end{array}\\right)\\xrightarrow{R3'\=\-\\frac{1}{2}R3}\\left(\\begin{array}{rrr\|r}1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&\-1\&3\\\\\\red{0}\&\\red{0}\&\\red{\\fbox{1}}\&\\red{3}\\end{array}\\right) \\]
Now, what really differentiates Gauss\-Jordan elimination from Gaussian elimination is the next few steps. Here, our goal will be to use the pivots to eliminate all of the entries *above* them. While this takes a little extra work, as we will see, it helps us avoid the tedious work of back substitution.
We’ll start at the southeast corner on the current pivot. We will use that pivot to eliminate the elements above it:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&\-1\&3\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right) \\xrightarrow{R2'\=R2\+R3} \\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\\\red{0}\&\\red{1}\&\\red{0}\&\\red{6}\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\]
\\\[ \\left(\\begin{array}{rrr\|r} 1\&\-2\&\\frac{1}{2}\&\-\\frac{1}{2}\\\\0\&1\&0\&6\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\xrightarrow{R1'\=R1\-\\frac{1}{2}R3}\\left(\\begin{array}{rrr\|r} \\red{1}\&\\red{\-2}\&\\red{0}\&\\red{\-2}\\\\0\&1\&0\&6\\\\0\&0\&\\fbox{1}\&3\\end{array}\\right)\\]
We’re almost done! One more pivot with elements above it to be eliminated:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-2\&0\&\-2\\\\0\&\\fbox{1}\&0\&6\\\\0\&0\&1\&3\\end{array}\\right) \\xrightarrow{R1'\=R1\+2R2}
\\left(\\begin{array}{rrr\|r}\\red{1}\&\\red{0}\&\\red{0}\&\\red{10}\\\\0\&\\fbox{1}\&0\&6\\\\0\&0\&1\&3\\end{array}\\right)\\]
And we’ve reached **reduced row echelon form**. How does this help us? Well, let’s transform back to a system of equations:
\\\[\\begin{cases}\\begin{align}
x\_1 \= 10\\\\
x\_2\= 6\\\\
x\_3 \=3\\end{align}\\end{cases}\\]
The solution is simply what’s left in the right hand column of the augmented matrix.
As you can see, the steps to performing Gaussian elimination and Gauss\-Jordan elimination are very similar. Gauss\-Jordan elimination is merely an extension of Gaussian elimination which brings the problem as close to completion as possible.
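If you would like to confirm the arithmetic without repeating the row operations by hand, here is a minimal sketch using the `rref()` function from the `pracma` package (introduced in Section 5\.5); the entries below are the augmented matrix from Example 5\.3\.
```
# A sketch: verify the Gauss-Jordan result of Example 5.3 numerically.
# rref() from the pracma package reduces a matrix to reduced row echelon form.
library(pracma)
A_aug = matrix(c( 0, 1, -1,  3,
                 -2, 4, -1,  1,
                 -2, 5, -4, -2), nrow=3, byrow=TRUE)
rref(A_aug)
# The last column of the output should read 10, 6, 3 -- the solution found above.
```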
### 5\.2\.1 Gauss\-Jordan Elimination Summary
1. Focusing on the first column, identify the first pivot element. The first pivot element should be located in the first row (if this entry is zero, we must interchange rows so that it is non\-zero). Our goal will be to use this element to eliminate all of the elements below it.
2. The pivot element should be equal to 1\. If it is not, we simply multiply the row by a constant to make it equal 1 (or interchange rows, if possible).
3. Eliminate (zero\-out) all elements below the pivot using the combination row operation.
4. Determine the next pivot and go back to step 2\.
* Only nonzero numbers are allowed to be pivots. If a coefficient in a pivot position is ever 0, then the rows of the matrix should be interchanged to find a nonzero pivot. If this is not possible then we continue on to the next possible column where a pivot position can be created.
5. When the last pivot is equal to 1, begin to eliminate all the entries above the pivot positions.
6. When all entries above and below each pivot element are equal to zero, the augmented matrix is said to be in *reduced row echelon form* and the Gauss\-Jordan elimination process is complete.
**Exercise 5\.2 (Gauss\-Jordan Elimination)** Use the Gauss\-Jordan method to solve the following system:
\\\[\\begin{cases}\\begin{align}
4x\_2\-3x\_3\=3\\\\
\-x\_1\+7x\_2\-5x\_3\=4\\\\
\-x\_1\+8x\_2\-6x\_3\=5\\end{align}\\end{cases}\\]
5\.3 Three Types of Systems
---------------------------
As was mentioned earlier, there are 3 situations that may arise when solving a system of equations:
* The system could have one **unique solution** (this is the situation of our examples thus far).
* The system could have no solutions (sometimes called *overdetermined* or ***inconsistent***).
* The system could have **infinitely many solutions** (sometimes called *underdetermined*).
### 5\.3\.1 The Unique Solution Case
Based on our earlier examples, we already have a sense for systems which fall into the first case.
**Theorem 5\.1 (Case 1: Unique solution)** A system of equations \\(\\A\\x\=\\b\\) has a unique solution if and only if *both* of the following conditions hold:
1. The number of equations is equal to the number of variables (i.e. the coefficient matrix \\(\\A\\) is *square*).
2. The number of pivots is equal to the number of rows/columns. In other words, under Gauss\-Jordan elimination, the coefficient matrix is transformed into the identity matrix:
\\\[\\A \\xrightarrow{Gauss\-Jordan} I\\]
In this case, we say that the matrix \\(\\A\\) is **invertible** because it is full\-rank (the rank of a matrix is the number of pivots after Gauss\-Jordan elimination) *and* square.
### 5\.3\.2 The Inconsistent Case
The second case scenario is a very specific one. In order for a system of equations to be **inconsistent** and have no solutions, it must be that after Gaussian elimination, a situation occurs where at least one equation reduces to \\(0\=\\alpha\\) where \\(\\alpha\\) is nonzero. Such a situation would look as follows (using asterisks to denote any nonzero numbers):
\\\[\\left(\\begin{array}{rrr\|r} \*\&\*\&\*\&\*\\\\0\&\*\&\*\&\*\\\\0\&0\&0\&\\alpha\\end{array}\\right) \\]
The third row of this augmented system indicates that \\\[0x\_1\+0x\_2\+0x\_3\=\\alpha\\] where \\(\\alpha\\neq 0\\), which is a contradiction. When we reach such a situation through Gauss\-Jordan elimination, we know the system is inconsistent.
**Example 5\.4 (Identifying an Inconsistent System)** \\\[\\begin{cases}\\begin{align}
x\-y\+z\=1\\\\
x\-y\-z\=2\\\\
x\+y\-z\=3\\\\
x\+y\+z\=4\\end{align}\\end{cases}\\]
Using the augmented matrix and Gaussian elimination, we take the following steps:
\\\[\\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\1\&\-1\&\-1\&2\\\\1\&1\&\-1\&3\\\\1\&1\&1\&4\\end{array}\\right) \\xrightarrow{\\substack{R2'\=R2\-R1 \\\\ R3'\=R3\-R1 \\\\ R4'\=R4\-R1}} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&0\&\-2\&1\\\\0\&2\&\-2\&2\\\\0\&2\&0\&3\\end{array}\\right) \\]
\\\[\\xrightarrow{ R4\\leftrightarrow R2}\\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&2\&\-2\&2\\\\0\&0\&\-2\&1\\end{array}\\right)\\xrightarrow{R3'\=R3\-R2} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&0\&\-2\&\-1\\\\0\&0\&\-2\&1\\end{array}\\right)\\]
\\\[\\xrightarrow{R4'\=R4\-R3} \\left(\\begin{array}{rrr\|r} 1\&\-1\&1\&1\\\\0\&2\&0\&3\\\\0\&0\&\-2\&\-1\\\\0\&0\&0\&2\\end{array}\\right)\\]
In this final step, we see our contradiction equation, \\(0\=2\\). Since this is obviously impossible, we conclude that the system is inconsistent.
Sometimes inconsistent systems are referred to as *over\-determined*. In this example, you can see that we had more equations than variables. This is a common characteristic of over\-determined or inconsistent systems. You can think of it as making too many demands on a small set of variables! In fact, this is precisely the situation in which we find ourselves when we approach linear regression. Regression systems do not have an exact solution: there is generally no set of \\(\\beta\_i's\\) that we can find so that our regression equation exactly fits every observation in the dataset \- the regression system is inconsistent. Thus, we need a way to get *as close as possible* to a solution; that is, we need to find a solution that minimizes the residual error. This is done using the Least Squares method, the subject of Chapter [10](leastsquares.html#leastsquares).
### 5\.3\.3 The Infinite Solutions Case
For the third case, consider the following system of equations written as an augmented matrix, and its reduced row echelon form after Gauss\-Jordan elimination. As an exercise, it is suggested that you confirm this result.
\\\[\\left(\\begin{array}{rrr\|r} 1\&2\&3\&0\\\\2\&1\&3\&0\\\\1\&1\&2\&0\\end{array}\\right) \\xrightarrow{Gauss\-Jordan} \\left(\\begin{array}{rrr\|r} 1\&0\&1\&0\\\\0\&1\&1\&0\\\\0\&0\&0\&0\\end{array}\\right) \\]
There are several things you should notice about this reduced row echelon form. For starters, it has a row that is completely 0\. This means, intuitively, that one of the equations was able to be completely eliminated \- it contained redundant information from the first two. The second thing you might notice is that there are only 2 pivot elements. Because there is no pivot in the third row, the last entries in the third column could not be eliminated! This is characteristic of what is called a **free\-variable**. Let’s see what this means by translating our reduced system back to equations:
\\\[\\begin{cases}\\begin{align}
x\_1\+x\_3 \= 0\\\\
x\_2\+x\_3\= 0\\end{align}\\end{cases}\\]
Clearly, our answer to this problem depends on the variable \\(x\_3\\), which is considered *free* to take on any value. Once we know the value of \\(x\_3\\) we can easily determine that
\\\[\\begin{align}
x\_1 \&\= \-x\_3 \\\\
x\_2 \&\= \-x\_3 \\end{align}\\]
Our convention here is to **parameterize** the solution and simply declare that \\(x\_3\=s\\) (or any other placeholder variable for a constant). Then our solution becomes:
\\\[\\pm x\_1\\\\x\_2\\\\x\_3 \\mp \= \\pm \-s \\\\ \-s \\\\ s \\mp \= s \\pm \-1\\\\\-1\\\\1 \\mp\\]
What this means is that any scalar multiple of the vector \\(\\pm \-1\\\\\-1\\\\1 \\mp\\) is a solution to the system. Thus there are infinitely many solutions!
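A quick numerical sanity check of this claim (a base\-R sketch; the coefficient matrix is taken from the system above):
```
# Sketch: any scalar multiple of (-1,-1,1) should solve the homogeneous system.
A = matrix(c(1, 2, 3,
             2, 1, 3,
             1, 1, 2), nrow=3, byrow=TRUE)
v = c(-1, -1, 1)
A %*% v        # returns the zero vector
A %*% (7*v)    # any scalar multiple (here s = 7) is also a solution
```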
**Theorem 5\.2 (Case 3: Infinitely Many Solutions)** A system of equations \\(\\A\\x\=\\b\\) has infinitely many solutions if the system is consistent and *any* of the following conditions hold:
1. The number of variables is greater than the number of equations.
2. There is at least one *free variable* presented in the reduced row echelon form.
3. The number of pivots is less than the number of variables.
**Example 5\.5 (Infinitely Many Solutions)** For the following reduced system of equations, characterize the set of solutions in the same fashion as the previous example.
\\\[\\left(\\begin{array}{rrrr\|r}
1\&0\&1\&2\&0\\\\0\&1\&1\&\-1\&0\\\\0\&0\&0\&0\&0\\\\0\&0\&0\&0\&0\\end{array}\\right) \\]
A good way to start is sometimes to write out the corresponding equations:
\\\[\\begin{cases}\\begin{align}
x\_1\+x\_3\+2x\_4 \= 0\\\\
x\_2\+x\_3\-x\_4\= 0\\end{align}\\end{cases} \\Longrightarrow \\begin{cases}\\begin{align}
x\_1\=\-x\_3\-2x\_4\\\\
x\_2\=\-x\_3\+x\_4\\end{align}\\end{cases}\\]
Now we have *two* variables which are free to take on any value. Thus, let
\\\[x\_3 \= s \\quad \\mbox{and} \\quad x\_4 \= t\\]
Then, our solution is:
\\\[\\pm x\_1\\\\x\_2\\\\x\_3\\\\x\_4 \\mp \= \\pm \-s\-2t \\\\ \-s\+t\\\\s\\\\t \\mp \= s\\pm \-1\\\\\-1\\\\1\\\\0 \\mp \+ t\\pm \-2\\\\1\\\\0\\\\1 \\mp\\]
so any linear combination of the vectors
\\\[\\pm \-1\\\\\-1\\\\1\\\\0 \\mp \\quad \\mbox{and} \\quad \\pm \-2\\\\1\\\\0\\\\1 \\mp\\]
will provide a solution to this system.
### 5\.3\.4 Matrix Rank
The **rank** of a matrix is the number of linearly independent rows or columns in the matrix (the number of linearly independent rows will always be the same as the number of linearly independent columns). It can be determined by reducing a matrix to row\-echelon form and counting the number of pivots. A matrix is said to be **full rank** when its rank is maximal, meaning that either all rows or all columns are linearly independent. In other words, an \\(m\\times n\\) matrix \\(\\A\\) is full rank when the rank(\\(\\A\\))\\(\=\\min(m,n)\\). A square matrix that is full rank will always have an inverse.
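In R, the rank can be computed without carrying out the elimination by hand. A minimal sketch (using base R’s `qr()` and, as an alternative, the `pracma` package used later in Section 5\.5), applied to the coefficient matrix from the infinite\-solutions example above:
```
# Sketch: the coefficient matrix from Section 5.3.3 has rank 2 (two pivots).
A = matrix(c(1, 2, 3,
             2, 1, 3,
             1, 1, 2), nrow=3, byrow=TRUE)
qr(A)$rank                                # rank via the QR decomposition
sum(rowSums(abs(pracma::rref(A))) > 0)    # equivalently, count nonzero rows of the RREF
```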
5\.4 Solving Matrix Equations
-----------------------------
One final piece to the puzzle is what happens when we have a matrix equation like
\\\[\\A\\X\=\\B\\]
This situation is an easy extension of our previous problem because we are essentially solving the same system of equations with several different right\-hand\-side vectors (the columns of \\(\\B\\)).
Let’s look at a \\(2\\times 2\\) example to get a feel for this! We’ll dissect the following matrix equation into two different systems of equations:
\\\[\\pm 1\&1\\\\2\&1\\mp \\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 3\&3\\\\4\&5 \\mp.\\]
Based on our previous discussions, we ought to be able to see that this matrix equation represents 4 separate equations which we’ll combine into two systems:
\\\[\\pm 1\&1\\\\2\&1\\mp \\pm x\_{11} \\\\x\_{21} \\mp \= \\pm 3\\\\4 \\mp \\quad \\mbox{and}\\quad \\pm 1\&1\\\\2\&1\\mp \\pm x\_{12} \\\\x\_{22} \\mp \= \\pm 3\\\\5 \\mp\\]
Once you convince yourself that the unknowns can be found in this way, let’s take a look at the augmented matrices for these two systems:
\\\[\\left(\\begin{array}{rr\|r}
1\&1\&3\\\\2\&1\&4\\end{array}\\right) \\quad\\mbox{and}\\quad \\left(\\begin{array}{rr\|r}
1\&1\&3\\\\2\&1\&5\\end{array}\\right)\\]
When performing Gauss\-Jordan elimination on these two augmented matrices, how are the row operations going to differ? They’re not! The same row operations will be used for each augmented matrix \- the only thing that will differ is how these row operations will affect the right hand side vectors. Thus, it is possible for us to keep track of those differences in one larger augmented matrix:
\\\[\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
2\&1\&4\&5
\\end{array}
\\end{pmatrix}\\]
We can then perform the row operations on both right\-hand sides at once:
\\\[\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
2\&1\&4\&5
\\end{array}
\\end{pmatrix}\\xrightarrow{R2'\=R2\-2R1}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
0\&\-1\&\-2\&\-1
\\end{array}
\\end{pmatrix} \\]
\\\[\\xrightarrow{R2'\=\-1R2}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&1\&3\&3\\\\
0\&1\&2\&1
\\end{array}
\\end{pmatrix}\\xrightarrow{R1'\=R1\-R2}\\begin{pmatrix}
\\begin{array}{cc\|cc}
1\&0\&1\&2\\\\
0\&1\&2\&1
\\end{array}
\\end{pmatrix}\\]
Now again, remembering the situation from which we came, we have the equivalent system:
\\\[\\pm 1\&0\\\\0\&1 \\mp \\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 1\&2\\\\2\&1\\mp\\]
So we can conclude that \\\[\\pm x\_{11} \& x\_{12} \\\\ x\_{21} \& x\_{22} \\mp \= \\pm 1\&2\\\\2\&1\\mp\\] and we have solved our system. This method is particularly useful when finding the inverse of a matrix.
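In R, `solve()` accepts a matrix right\-hand side, so this whole computation is one call; a minimal sketch with the matrices from the example above:
```
# Sketch: solve AX = B for the matrix X in one step.
A = matrix(c(1, 1,
             2, 1), nrow=2, byrow=TRUE)
B = matrix(c(3, 3,
             4, 5), nrow=2, byrow=TRUE)
solve(A, B)   # returns X; its columns are the solutions of the two systems
```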
### 5\.4\.1 Solving for the Inverse of a Matrix
For any square matrix \\(\\A\\), we know the inverse matrix (\\(\\A^{\-1}\\)), if it exists, satisfies the following matrix equation,
\\\[\\A\\A^{\-1} \= \\I.\\]
Using the Gauss\-Jordan method with multiple right hand sides, we can solve for the inverse of any matrix. We simply start with an augmented matrix with \\(\\A\\) on the left and the identity on the right, and then use Gauss\-Jordan elimination to transform the matrix \\(\\A\\) into the identity matrix.
\\\[\\left(\\begin{array}{r\|r}
\\bo{A} \& \\I\\end{array}\\right)\\xrightarrow{Gauss\-Jordan}\\left(\\begin{array}{r\|r} \\bo{I} \& \\A^{\-1}\\end{array}\\right)\\]
If this is possible then the matrix on the right is the inverse of \\(\\A\\). If this is not possible then \\(\\A\\) does not have an inverse. Let’s see a quick example of this.
**Example 5\.6 (Finding a Matrix Inverse)** Find the inverse of \\\[\\A \= \\pm \-1\&2\&\-1\\\\0\&\-1\&1\\\\2\&\-1\&0 \\mp\\] using Gauss\-Jordan Elimination.
Since \\(\\A\\A^{\-1} \= \\I\\), we set up the augmented matrix as \\(\\left(\\begin{array}{r\|r} \\bo{A} \& \\I\\end{array}\\right)\\):
\\\[\\begin{pmatrix}
\\begin{array}{ccc\|ccc}\-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\2\&\-1\&0\&0\&0\&1 \\end{array}\\end{pmatrix} \\xrightarrow{R3'\=R3\+2R1}
\\begin{pmatrix}
\\begin{array}{ccc\|ccc} \-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&3\&\-2\&2\&0\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}
\\begin{array}{ccc\|ccc} \-1\&2\&\-1\&1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&3\&\-2\&2\&0\&1 \\end{array}\\end{pmatrix}
\\xrightarrow{\\substack{R1'\=\-1R1\\\\R3'\=R3\+3R2}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&1\&\-1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&1\&\-1\&0\&0\\\\0\&\-1\&1\&0\&1\&0\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\xrightarrow{\\substack{R1'\=R1\-R3\\\\R2'\=R2\-R3}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&0\&\-3\&\-3\&\-1\\\\0\&\-1\&0\&\-2\&\-2\&\-1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
\\\[\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&\-2\&0\&\-3\&\-3\&\-1\\\\0\&\-1\&0\&\-2\&\-2\&\-1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\xrightarrow{\\substack{R2'\=\-1R2\\\\R1'\=R1\+2R2}}\\begin{pmatrix}\\begin{array}{ccc\|ccc} 1\&0\&0\&1\&1\&1\\\\0\&1\&0\&2\&2\&1\\\\0\&0\&1\&2\&3\&1 \\end{array}\\end{pmatrix}\\]
Finally, we have completed our task. The inverse of \\(\\A\\) is the matrix on the right hand side of the augmented matrix!
\\\[\\A^{\-1} \= \\pm 1\&1\&1\\\\2\&2\&1\\\\2\&3\&1 \\mp\\]
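As a quick check (a base\-R sketch), `solve()` with a single matrix argument returns the inverse, and multiplying \\(\\A\\) by it should recover the identity:
```
# Sketch: verify the inverse found in Example 5.6.
A = matrix(c(-1,  2, -1,
              0, -1,  1,
              2, -1,  0), nrow=3, byrow=TRUE)
solve(A)          # should match the inverse obtained by Gauss-Jordan elimination
A %*% solve(A)    # should be the 3x3 identity matrix
```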
**Exercise 5\.3 (Finding a Matrix Inverse)** Use the same method to determine the inverse of
\\\[\\B\=\\pm 1\&1\&1\\\\2\&2\&1\\\\2\&3\&1 \\mp\\]
(*hint: Example [5\.6](solvesys.html#exm:findinverse) should tell you the answer you expect to find!*)
**Example 5\.7 (Inverse of a Diagonal Matrix)** A full rank diagonal matrix (one with no zero diagonal elements) has a particularly neat and tidy inverse. Here we motivate the definition by working through an example. Find the inverse of the diagonal matrix \\(\\D\\),
\\\[\\D \= \\pm 3\&0\&0\\\\0\&\-2\&0\\\\0\&0\&\\sqrt{5} \\mp \\]
To begin the process, we start with an augmented matrix and proceed with Gauss\-Jordan Elimination. In this case, the process is quite simple! The elements above and below the diagonal pivots are already zero, so we simply need to make each pivot equal to 1!
\\\[\\pm\\begin{array}{ccc\|ccc} 3\&0\&0\&1\&0\&0\\\\0\&\-2\&0\&0\&1\&0\\\\0\&0\&\\sqrt{5}\&0\&0\&1 \\end{array}\\mp
\\xrightarrow{\\substack{R1'\=\\frac{1}{3}R1 \\\\R2' \= \-\\frac{1}{2} R2\\\\R3'\=\\frac{1}{\\sqrt{5}} R3}}
\\pm\\begin{array}{ccc\|ccc} 1\&0\&0\&\\frac{1}{3}\&0\&0\\\\0\&1\&0\&0\&\-\\frac{1}{2}\&0\\\\0\&0\&1\&0\&0\&\\frac{1}{\\sqrt{5}} \\end{array}\\mp\\]
Thus, the inverse of \\(\\D\\) is:
\\\[\\D^{\-1} \= \\pm \\frac{1}{3}\&0\&0\\\\0\&\-\\frac{1}{2}\&0\\\\0\&0\&\\frac{1}{\\sqrt{5}} \\mp \\]
As you can see, all we had to do is take the scalar inverse of each diagonal element!
**Definition 5\.3 (Inverse of a Diagonal Matrix)** An \\(n\\times n\\) diagonal matrix \\(\\D \= diag\\{d\_{11},d\_{22},\\dots,d\_{nn}\\}\\) with no zero diagonal elements is invertible with inverse
\\\[\\D^{\-1} \= diag\\{\\frac{1}{d\_{11}},\\frac{1}{d\_{22}},\\dots,\\frac{1}{d\_{nn}}\\}\\]
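In R this is a one\-liner; a minimal sketch using the matrix \\(\\D\\) from Example 5\.7:
```
# Sketch: the inverse of a diagonal matrix is the diagonal matrix of reciprocals.
D = diag(c(3, -2, sqrt(5)))
solve(D)                      # same result as...
diag(1/c(3, -2, sqrt(5)))     # ...taking the scalar inverse of each diagonal element
```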
5\.5 Gauss\-Jordan Elimination in R
-----------------------------------
It is important that you understand what is happening in the process of Gauss\-Jordan Elimination. Once you have a handle on how the procedure works, it is no longer necessary to do every calculation by hand. We can skip to the reduced row echelon form of a matrix using the `pracma` package in R.
We’ll start by creating our matrix as a variable in R. Matrices are entered as one vector, which R then breaks apart into rows and columns in the way that you specify (with `nrow`/`ncol`). The default way that R reads a vector into a matrix is down the columns. To read the data in across the rows, use the `byrow=TRUE` option. Once a matrix is created, it is stored under the variable name you give it (below, we call our matrices \\(\\Y\\) and \\(\\X\\)). We can then print out the stored matrix by simply typing \\(\\Y\\) or \\(\\X\\) at the prompt:
```
(Y=matrix(c(1,2,3,4),nrow=2,ncol=2))
```
```
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
```
```
(X=matrix(c(1,2,3,4),nrow=2,ncol=2,byrow=TRUE))
```
```
## [,1] [,2]
## [1,] 1 2
## [2,] 3 4
```
To perform Gauss\-Jordan elimination, we need to install the `pracma` package which contains the code for this procedure.
```
install.packages("pracma")
```
After installing a package in R, you must load it into your current session before you can use it. This is done with the `library()` command:
```
library("pracma")
```
Now that the library is accessible, we can use the `rref()` command to get the reduced row echelon form of an augmented matrix, \\(\\A\\):
```
A= matrix(c(1,1,1,1,-1,-1,1,1,1,-1,-1,1,1,2,3,4), nrow=4, ncol=4)
A
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1 -1 1 1
## [2,] 1 -1 -1 2
## [3,] 1 1 -1 3
## [4,] 1 1 1 4
```
```
rref(A)
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1 0 0 0
## [2,] 0 1 0 0
## [3,] 0 0 1 0
## [4,] 0 0 0 1
```
And we have the reduced row echelon form for one of the problems from the worksheets! You can see this system of equations is inconsistent because the bottom row amounts to the equation
\\\[0x\_1\+0x\_2\+0x\_3 \= 1\.\\]
This should save you some time and energy by skipping the arithmetic steps in Gauss\-Jordan Elimination.
5\.6 Exercises
--------------
1. For the following two systems of equations, draw both equations on the same plane. Comment on what you find and what it means about the system of equations.
    1. \\\[\\begin{eqnarray\*}
    x\_1 \+ x\_2 \&\=\& 10 \\\\
    \-x\_1 \+ x\_2 \&\=\& 0
    \\end{eqnarray\*}\\]
    2. \\\[\\begin{eqnarray\*}
    x\_1 \- 2x\_2 \&\=\& \-3 \\\\
    2x\_1 \- 4x\_2 \&\=\& 8
    \\end{eqnarray\*}\\]
2. We’ve drawn 2 of the 3 possible outcomes for systems of equations. Give an example of the 3rd possible outcome and draw the accompanying picture.
3. Specify whether the following augmented matrices are in row\-echelon form (REF), reduced row\-echelon form (RREF), or neither:
    1. \\(\\left(\\begin{array}{ccc\|c} 3\&2\&1\&2\\\\0\&2\&0\&1\\\\0\&0\&1\&5 \\end{array}\\right)\\)
    2. \\(\\left(\\begin{array}{ccc\|c} 3\&2\&1\&2\\\\0\&2\&0\&1\\\\0\&4\&0\&0 \\end{array}\\right)\\)
    3. \\(\\left(\\begin{array}{ccc\|c} 1\&1\&0\&2\\\\0\&0\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
    4. \\(\\left(\\begin{array}{ccc\|c} 1\&2\&0\&2\\\\0\&1\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
4. Using Gaussian Elimination on the augmented matrices, reduce each system of equations to a triangular form and solve using back\-substitution.
    1. \\\[\\begin{cases}
    x\_1 \+2x\_2\= 3\\\\
    \-x\_1\+x\_2\=0\\end{cases}\\]
    2. \\\[\\begin{cases}
    x\_1\+x\_2 \+2x\_3\= 7\\\\
    x\_1\+x\_3 \= 4\\\\
    \-2x\_1\-2x\_2 \=\-6\\end{cases}\\]
    3. \\\[\\begin{cases}\\begin{align}
    2x\_1\-x\_2 \+x\_3\= 1\\\\
    \-x\_1\+2x\_2\+3x\_3 \= 6\\\\
    x\_2\+4x\_3 \=6 \\end{align}\\end{cases}\\]
5. Using Gauss\-Jordan Elimination on the augmented matrices, reduce each system of equations from the previous exercise to reduced row\-echelon form and give the solution as a vector.
6. Use either Gaussian or Gauss\-Jordan Elimination to solve the following systems of equations. Indicate whether the systems have a unique solution, no solution, or infinitely many solutions. If the system has infinitely many solutions, exhibit a general solution in vector form as we did in Section [5\.3\.3](solvesys.html#infinitesol).
    1. \\\[\\begin{cases}\\begin{align}
    2x\_1\+2x\_2\+6x\_3\=4\\\\
    2x\_1\+x\_2\+7x\_3\=6\\\\
    \-2x\_1\-6x\_2\-7x\_3\=\-1\\end{align}\\end{cases}\\]
    2. \\\[\\begin{cases}\\begin{align}
    1x\_1\+2x\_2\+2x\_3\=0\\\\
    2x\_1\+5x\_2\+7x\_3\=0\\\\
    3x\_1\+6x\_2\+6x\_3\=0\\end{align}\\end{cases}\\]
    3. \\\[\\begin{cases}\\begin{align}
    1x\_1\+3x\_2\-5x\_3\=0\\\\
    1x\_1\-2x\_2\+4x\_3\=2\\\\
    2x\_1\+1x\_2\-1x\_3\=0\\end{align}\\end{cases}\\]
7. For the following augmented matrices, circle the pivot elements and give the rank of the coefficient matrix along with the number of free variables.
    1. \\(\\left(\\begin{array}{cccc\|c} 3\&2\&1\&1\&2\\\\0\&2\&0\&0\&1\\\\0\&0\&1\&0\&5 \\end{array}\\right)\\)
    2. \\(\\left(\\begin{array}{ccc\|c} 1\&1\&0\&2\\\\0\&0\&1\&1\\\\0\&0\&0\&0 \\end{array}\\right)\\)
    3. \\(\\left(\\begin{array}{ccccc\|c} 1\&2\&0\&1\&0\&2\\\\0\&1\&1\&1\&0\&1\\\\0\&0\&0\&1\&1\&2\\\\0\&0\&0\&0\&0\&0 \\end{array}\\right)\\)
8. Use Gauss\-Jordan Elimination to find the inverse of the following matrices, if possible.
    1. \\(\\A\=\\pm 2\&3\\\\2\&2\\mp\\)
    2. \\(\\B\=\\pm 1\&2\\\\2\&4\\mp\\)
    3. \\(\\C\=\\pm 1\&2\&3\\\\4\&5\&6\\\\7\&8\&9\\mp\\)
    4. \\(\\D\=\\pm 4\&0\&0\\\\0\&\-4\&0\\\\0\&0\&2 \\mp\\)
9. What is the inverse of a diagonal matrix, \\(\\bo{D}\=diag\\{\\sigma\_{1},\\sigma\_{2}, \\dots,\\sigma\_{n}\\}\\)?
10. Suppose you have a matrix of data, \\(\\A\_{n\\times p}\\), containing \\(n\\) observations on \\(p\\) variables. Suppose the standard deviations of these variables are contained in a diagonal matrix \\\[\\bo{S}\= diag\\{\\sigma\_1, \\sigma\_2,\\dots,\\sigma\_p\\}.\\] Give a formula for a matrix that contains the same data but with each variable divided by its standard deviation. *Hint: This problem connects Text Exercise [2\.5](mult.html#exr:diagmultexer) and Example [5\.7](solvesys.html#exm:diaginverse)*.
5\.7 List of Key Terms
----------------------
* systems of equations
* row operations
* row\-echelon form
* pivot element
* Gaussian elimination
* Gauss\-Jordan elimination
* reduced row\-echelon form
* rank
* unique solution
* infinitely many solutions
* inconsistent
* back\-substitution
Chapter 11 Applications of Least Squares
========================================
11\.1 Simple Linear Regression
------------------------------
### 11\.1\.1 Cars Data
The `cars` dataset is included in the `datasets` package. This dataset contains observations of speed and stopping distance for 50 cars. We can take a look at the summary statistics by using the **summary** function.
```
summary(cars)
```
```
## speed dist
## Min. : 4.0 Min. : 2.00
## 1st Qu.:12.0 1st Qu.: 26.00
## Median :15.0 Median : 36.00
## Mean :15.4 Mean : 42.98
## 3rd Qu.:19.0 3rd Qu.: 56.00
## Max. :25.0 Max. :120.00
```
We can plot these two variables as follows:
```
plot(cars)
```
### 11\.1\.2 Setting up the Normal Equations
Let’s set up a system of equations
\\\[\\mathbf{X}\\boldsymbol\\beta\=\\mathbf{y}\\]
to create the model
\\\[stopping\\\_distance\=\\beta\_0\+\\beta\_1speed.\\]
To do this, we need a design matrix \\(\\mathbf{X}\\) containing a column of ones for the intercept term and a column containing the speed variable. We also need a vector \\(\\mathbf{y}\\) containing the corresponding stopping distances.
#### The `model.matrix()` function
The `model.matrix()` function will create the design or modeling matrix \\(\\mathbf{X}\\) for us. This function takes a formula and data matrix as input and exports the matrix that we represent as \\(\\mathbf{X}\\) in the normal equations. For datasets with categorical (factor) inputs, this function would also create dummy variables for each level, leaving out a reference level by default. You can override this default so that no reference level is dropped (that is, perform **one\-hot\-encoding** on said categorical variable) by including the following option as a third input to the function, where `df` is the name of your data frame:
`contrasts.arg = lapply(df[,sapply(df,is.factor)], contrasts, contrasts=FALSE)`
For an exact example, see the commented out code in the chunk below. There are no factors in the `cars` data, so the code may not even run, but I wanted to provide the line of code necessary for this task, as it is one that we use quite frequently for clustering, PCA, or machine learning!
```
# Create matrix X and label the columns
X=model.matrix(dist~speed, data=cars)
# Create vector y and label the column
y=cars$dist
# CODE TO PERFORM ONE-HOT ENCODING (NO REFERENCE LEVEL FOR CATEGORICAL DUMMIES)
# X=model.matrix(dist~speed, data=cars, contrasts.arg = lapply(cars[,sapply(cars,is.factor)], contrasts, contrasts=FALSE))
```
Let’s print the first 10 rows of each to see what we did:
```
# Show first 10 rows, all columns. To show only observations 2,4, and 7, for
# example, the code would be X[c(2,4,7), ]
X[1:10, ]
```
```
## (Intercept) speed
## 1 1 4
## 2 1 4
## 3 1 7
## 4 1 7
## 5 1 8
## 6 1 9
## 7 1 10
## 8 1 10
## 9 1 10
## 10 1 11
```
```
y[1:10]
```
```
## [1] 2 10 4 22 16 10 18 26 34 17
```
### 11\.1\.3 Solving for Parameter Estimates and Statistics
Now let’s find our parameter estimates by solving the normal equations,
\\\[\\mathbf{X}^T\\mathbf{X}\\boldsymbol\\beta \= \\mathbf{X}^T\\mathbf{y}\\]
using the built in **solve** function. To solve the system \\(\\mathbf{A}\\mathbf{x}\=\\mathbf{b}\\) we’d use `solve(A,b)`.
```
(beta=solve(t(X) %*% X ,t(X)%*%y))
```
```
## [,1]
## (Intercept) -17.579095
## speed 3.932409
```
At the same time we can compute the residuals,
\\\[\\mathbf{r}\=\\mathbf{y}\-\\mathbf{\\hat{y}}\\]
the total sum of squares (SST),
\\\[\\sum\_{i\=1}^n (y\-\\bar{y})^2\=(\\mathbf{y}\-\\mathbf{\\bar{y}})^T(\\mathbf{y}\-\\mathbf{\\bar{y}})\=\\\|\\mathbf{y}\-\\mathbf{\\bar{y}}\\\|^2\\]
the regression sum of squares (SSR or SSM)
\\\[\\sum\_{i\=1}^n (\\hat{y}\-\\bar{y})^2\=(\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}})^T(\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}})\=\\\|\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}}\\\|^2\\]
the residual sum of squares (SSE)
\\\[\\sum\_{i\=1}^n r\_i^2 \=\\mathbf{r}^T\\mathbf{r}\=\\\|\\mathbf{r}\\\|^2\\]
and the unbiased estimator of the variance of the residuals, using the residual degrees of freedom, which is \\(n\-2\=48\\):
\\\[\\widehat{\\sigma\_{\\varepsilon}}^2 \=\\frac{SSE}{d.f.} \= \\frac{\\\|\\mathbf{r}\\\|^2}{48}\\]
Then \\(R^2\\):
\\\[R^2 \= \\frac{SSR}{SST}\\]
We can also compute the standard error of \\(\\widehat{\\boldsymbol\\beta}\\) since
\\\[\\begin{eqnarray\*}
\\widehat{\\boldsymbol\\beta} \&\=\& (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y}\\\\
var(\\widehat{\\boldsymbol\\beta})\&\=\&var((\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y})\\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T var(\\mathbf{y}) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T (\\widehat{\\sigma\_{\\varepsilon}}^2) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2 (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2(\\mathbf{X}^T\\mathbf{X})^{\-1}
\\end{eqnarray\*}\\]
The variances of each \\(\\widehat\\beta\\) are given by the diagonal elements of their covariance matrix (see Definition [6\.3](norms.html#def:covariancedef)), and the standard errors of each \\(\\widehat\\beta\\) are thus obtained by taking the square roots of these diagonal elements:
\\\[s.e.(\\widehat{\\beta\_i})\=\\sqrt{\\widehat{\\sigma\_{\\varepsilon}}^2\[(\\mathbf{X}^T\\mathbf{X})^{\-1}]\_{ii}}\\]
```
meany=mean(y)
XXinv=solve(t(X)%*%X)
yhat=X%*%XXinv%*%t(X)%*%y
resid=y-yhat
SStotal=norm(y-meany,type="2")^2
### OR SStotal=t(y-meany)%*%(y-meany)
SSreg=norm(yhat-meany,type="2")^2
### OR SSreg=t(yhat-meany)%*%(yhat-meany)
SSresid=norm(resid,type="2")^2
### OR SSresid=t(resid)%*%resid
Rsquared=SSreg/SStotal
StdErrorResiduals=norm(resid/sqrt(48), type="2") #=sqrt(SSresid/48)
CovBeta=SSresid*XXinv/48
StdErrorIntercept = sqrt(CovBeta[1,1])
StdErrorSlope = sqrt(CovBeta[2,2])
```
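The labeled values below are simply these quantities printed out; a minimal sketch of the print statements that would produce them (the exact `paste()` formatting is an assumption) is:
```
print(paste("Rsquared:", Rsquared))
print(paste("SSresid:", SSresid))
print(paste("SSmodel:", SSreg))
print(paste("StdErrorResiduals:", StdErrorResiduals))
print(paste("StdErrorIntercept:", StdErrorIntercept))
print(paste("StdErrorSlope:", StdErrorSlope))
```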
```
## [1] "Rsquared: 0.651079380758252"
```
```
## [1] "SSresid: 11353.5210510949"
```
```
## [1] "SSmodel: 21185.4589489052"
```
```
## [1] "StdErrorResiduals: 15.3795867488199"
```
```
## [1] "StdErrorIntercept: 6.75844016937923"
```
```
## [1] "StdErrorIntercept: 0.415512776657122"
```
Let’s plot our regression line over the original data:
```
plot(cars)
abline(beta[1],beta[2],col='blue')
```
### 11\.1\.4 OLS in R via `lm()`
Finally, let’s compare our results to the built in linear model solver, `lm()`:
```
fit = lm(dist ~ speed, data=cars)
summary(fit)
```
```
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
```
```
anova(fit)
```
```
## Analysis of Variance Table
##
## Response: dist
## Df Sum Sq Mean Sq F value Pr(>F)
## speed 1 21186 21185.5 89.567 1.49e-12 ***
## Residuals 48 11354 236.5
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
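For reference, the same quantities we computed by hand can be pulled directly from the fitted object (a base\-R sketch):
```
coef(fit)                  # intercept and slope estimates
sigma(fit)                 # residual standard error (15.38)
summary(fit)$r.squared     # R-squared (0.6511)
sqrt(diag(vcov(fit)))      # standard errors of the coefficients
```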
11\.2 Multiple Linear Regression
--------------------------------
### 11\.2\.1 Bike Sharing Dataset
11\.1 Simple Linear Regression
------------------------------
### 11\.1\.1 Cars Data
The \`cars’ dataset is included in the datasets package. This dataset contains observations of speed and stopping distance for 50 cars. We can take a look at the summary statistics by using the **summary** function.
```
summary(cars)
```
```
## speed dist
## Min. : 4.0 Min. : 2.00
## 1st Qu.:12.0 1st Qu.: 26.00
## Median :15.0 Median : 36.00
## Mean :15.4 Mean : 42.98
## 3rd Qu.:19.0 3rd Qu.: 56.00
## Max. :25.0 Max. :120.00
```
We can plot these two variables as follows:
```
plot(cars)
```
### 11\.1\.2 Setting up the Normal Equations
Let’s set up a system of equations
\\\[\\mathbf{X}\\boldsymbol\\beta\=\\mathbf{y}\\]
to create the model
\\\[stopping\\\_distance\=\\beta\_0\+\\beta\_1speed.\\]
To do this, we need a design matrix \\(\\mathbf{X}\\) containing a column of ones for the intercept term and a column containing the speed variable. We also need a vector \\(\\mathbf{y}\\) containing the corresponding stopping distances.
#### The `model.matrix()` function
The `model.matrix()` function will create the design or modeling matrix \\(\\mathbf{X}\\) for us. This function takes a formula and data matrix as input and exports the matrix that we represent as \\(\\mathbf{X}\\) in the normal equations. For datasets with categorical (factor) inputs, this function would also create dummy variables for each level, leaving out a reference level by default. You can override the default to leave out a reference level (when you override this default, you perform **one\-hot\-encoding** on said categorical variable) by including the following option as a third input to the function, where `df` is the name of your data frame:
`contrasts.arg = lapply(df[,sapply(df,is.factor) ], contrasts, contrasts=FALSE`
For an exact example, see the commented out code in the chunk below. There are no factors in the `cars` data, so the code may not even run, but I wanted to provide the line of code necessary for this task, as it is one that we use quite frequently for clustering, PCA, or machine learning!
```
# Create matrix X and label the columns
X=model.matrix(dist~speed, data=cars)
# Create vector y and label the column
y=cars$dist
# CODE TO PERFORM ONE-HOT ENCODING (NO REFERENCE LEVEL FOR CATEGORICAL DUMMIES)
# X=model.matrix(dist~speed, data=cars, contrasts.arg = lapply(cars[,sapply(cars,is.factor) ], contrasts, contrasts=FALSE)
```
Let’s print the first 10 rows of each to see what we did:
```
# Show first 10 rows, all columns. To show only observations 2,4, and 7, for
# example, the code would be X[c(2,4,7), ]
X[1:10, ]
```
```
## (Intercept) speed
## 1 1 4
## 2 1 4
## 3 1 7
## 4 1 7
## 5 1 8
## 6 1 9
## 7 1 10
## 8 1 10
## 9 1 10
## 10 1 11
```
```
y[1:10]
```
```
## [1] 2 10 4 22 16 10 18 26 34 17
```
### 11\.1\.3 Solving for Parameter Estimates and Statistics
Now lets find our parameter estimates by solving the normal equations,
\\\[\\mathbf{X}^T\\mathbf{X}\\boldsymbol\\beta \= \\mathbf{X}^T\\mathbf{y}\\]
using the built in **solve** function. To solve the system \\(\\mathbf{A}\\mathbf{x}\=\\mathbf{b}\\) we’d use `solve(A,b)`.
```
(beta=solve(t(X) %*% X ,t(X)%*%y))
```
```
## [,1]
## (Intercept) -17.579095
## speed 3.932409
```
At the same time we can compute the residuals,
\\\[\\mathbf{r}\=\\mathbf{y}\-\\mathbf{\\hat{y}}\\]
the total sum of squares (SST),
\\\[\\sum\_{i\=1}^n (y\-\\bar{y})^2\=(\\mathbf{y}\-\\mathbf{\\bar{y}})^T(\\mathbf{y}\-\\mathbf{\\bar{y}})\=\\\|\\mathbf{y}\-\\mathbf{\\bar{y}}\\\|^2\\]
the regression sum of squares (SSR or SSM)
\\\[\\sum\_{i\=1}^n (\\hat{y}\-\\bar{y})^2\=(\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}})^T(\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}})\=\\\|\\mathbf{\\hat{y}}\-\\mathbf{\\bar{y}}\\\|^2\\]
the residual sum of squares (SSE)
\\\[\\sum\_{i\=1}^n r\_i \=\\mathbf{r}^T\\mathbf{r}\=\\\|\\mathbf{r}\\\|^2\\]
and the unbiased estimator of the variance of the residuals, using the model degrees of freedom which is \\(n\-2\=48\\):
\\\[\\widehat{\\sigma\_{\\varepsilon}}^2 \=\\frac{SSE}{d.f.} \= \\frac{\\\|\\mathbf{r}\\\|^2}{48}\\]
Then \\(R^2\\):
\\\[R^2 \= \\frac{SSR}{SST}\\]
We can also compute the standard error of \\(\\widehat{\\boldsymbol\\beta}\\) since
\\\[\\begin{eqnarray\*}
\\widehat{\\boldsymbol\\beta} \&\=\& (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y}\\\\
var(\\widehat{\\boldsymbol\\beta})\&\=\&var((\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y})\\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T var(\\mathbf{y}) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T (\\widehat{\\sigma\_{\\varepsilon}}^2\) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2 (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2(\\mathbf{X}^T\\mathbf{X})^{\-1}
\\end{eqnarray\*}\\]
The variances of each \\(\\widehat\\beta\\) are given by the diagonal elements of their covariance matrix (see Definition [6\.3](norms.html#def:covariancedef)), and the standard errors of each \\(\\widehat\\beta\\) are thus obtained by taking the square roots of these diagonal elements:
\\\[s.e.(\\widehat{\\beta\_i})\=\\sqrt{\\widehat{\\sigma\_{\\varepsilon}}\[(\\mathbf{X}^T\\mathbf{X})^{\-1}]\_{ii}}\\]
```
meany=mean(y)
XXinv=solve(t(X)%*%X)
yhat=X%*%XXinv%*%t(X)%*%y
resid=y-yhat
SStotal=norm(y-meany,type="2")^2
### OR SStotal=t(y-meany)%*%(y-meany)
SSreg=norm(yhat-meany,type="2")^2
### OR SSreg=t(yhat-meany)%*%(yhat-meany)
SSresid=norm(resid,type="2")^2
### OR SSresid=t(resid)%*%resid
Rsquared=SSreg/SStotal
StdErrorResiduals=norm(resid/sqrt(48), type="2") #=sqrt(SSresid/48)
CovBeta=SSresid*XXinv/48
StdErrorIntercept = sqrt(CovBeta[1,1])
StdErrorSlope = sqrt(CovBeta[2,2])
```
```
## [1] "Rsquared: 0.651079380758252"
```
```
## [1] "SSresid: 11353.5210510949"
```
```
## [1] "SSmodel: 21185.4589489052"
```
```
## [1] "StdErrorResiduals: 15.3795867488199"
```
```
## [1] "StdErrorIntercept: 6.75844016937923"
```
```
## [1] "StdErrorIntercept: 0.415512776657122"
```
Let’s plot our regression line over the original data:
```
plot(cars)
abline(beta[1],beta[2],col='blue')
```
### 11\.1\.4 OLS in R via `lm()`
Finally, let’s compare our results to the built in linear model solver, `lm()`:
```
fit = lm(dist ~ speed, data=cars)
summary(fit)
```
```
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
```
```
anova(fit)
```
```
## Analysis of Variance Table
##
## Response: dist
## Df Sum Sq Mean Sq F value Pr(>F)
## speed 1 21186 21185.5 89.567 1.49e-12 ***
## Residuals 48 11354 236.5
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
### 11\.1\.1 Cars Data
The \`cars’ dataset is included in the datasets package. This dataset contains observations of speed and stopping distance for 50 cars. We can take a look at the summary statistics by using the **summary** function.
```
summary(cars)
```
```
## speed dist
## Min. : 4.0 Min. : 2.00
## 1st Qu.:12.0 1st Qu.: 26.00
## Median :15.0 Median : 36.00
## Mean :15.4 Mean : 42.98
## 3rd Qu.:19.0 3rd Qu.: 56.00
## Max. :25.0 Max. :120.00
```
We can plot these two variables as follows:
```
plot(cars)
```
### 11\.1\.2 Setting up the Normal Equations
Let’s set up a system of equations
\\\[\\mathbf{X}\\boldsymbol\\beta\=\\mathbf{y}\\]
to create the model
\\\[stopping\\\_distance\=\\beta\_0\+\\beta\_1speed.\\]
To do this, we need a design matrix \\(\\mathbf{X}\\) containing a column of ones for the intercept term and a column containing the speed variable. We also need a vector \\(\\mathbf{y}\\) containing the corresponding stopping distances.
#### The `model.matrix()` function
The `model.matrix()` function will create the design or modeling matrix \\(\\mathbf{X}\\) for us. This function takes a formula and data matrix as input and exports the matrix that we represent as \\(\\mathbf{X}\\) in the normal equations. For datasets with categorical (factor) inputs, this function would also create dummy variables for each level, leaving out a reference level by default. You can override the default to leave out a reference level (when you override this default, you perform **one\-hot\-encoding** on said categorical variable) by including the following option as a third input to the function, where `df` is the name of your data frame:
`contrasts.arg = lapply(df[,sapply(df,is.factor) ], contrasts, contrasts=FALSE`
For an exact example, see the commented out code in the chunk below. There are no factors in the `cars` data, so the code may not even run, but I wanted to provide the line of code necessary for this task, as it is one that we use quite frequently for clustering, PCA, or machine learning!
```
# Create matrix X and label the columns
X=model.matrix(dist~speed, data=cars)
# Create vector y and label the column
y=cars$dist
# CODE TO PERFORM ONE-HOT ENCODING (NO REFERENCE LEVEL FOR CATEGORICAL DUMMIES)
# X=model.matrix(dist~speed, data=cars, contrasts.arg = lapply(cars[,sapply(cars,is.factor) ], contrasts, contrasts=FALSE)
```
Let’s print the first 10 rows of each to see what we did:
```
# Show first 10 rows, all columns. To show only observations 2,4, and 7, for
# example, the code would be X[c(2,4,7), ]
X[1:10, ]
```
```
## (Intercept) speed
## 1 1 4
## 2 1 4
## 3 1 7
## 4 1 7
## 5 1 8
## 6 1 9
## 7 1 10
## 8 1 10
## 9 1 10
## 10 1 11
```
```
y[1:10]
```
```
## [1] 2 10 4 22 16 10 18 26 34 17
```
### 11\.1\.3 Solving for Parameter Estimates and Statistics
Now let’s find our parameter estimates by solving the normal equations,
\\\[\\mathbf{X}^T\\mathbf{X}\\boldsymbol\\beta \= \\mathbf{X}^T\\mathbf{y}\\]
using the built\-in `solve()` function. To solve the system \\(\mathbf{A}\mathbf{x}\=\mathbf{b}\\), we’d use `solve(A,b)`.
```
(beta=solve(t(X) %*% X ,t(X)%*%y))
```
```
## [,1]
## (Intercept) -17.579095
## speed 3.932409
```
At the same time we can compute the residuals,
\\\[\\mathbf{r}\=\\mathbf{y}\-\\mathbf{\\hat{y}}\\]
the total sum of squares (SST),
\\\[\sum\_{i\=1}^n (y\_i\-\bar{y})^2\=(\mathbf{y}\-\mathbf{\bar{y}})^T(\mathbf{y}\-\mathbf{\bar{y}})\=\\\|\mathbf{y}\-\mathbf{\bar{y}}\\\|^2\\]
the regression sum of squares (SSR or SSM)
\\\[\sum\_{i\=1}^n (\hat{y}\_i\-\bar{y})^2\=(\mathbf{\hat{y}}\-\mathbf{\bar{y}})^T(\mathbf{\hat{y}}\-\mathbf{\bar{y}})\=\\\|\mathbf{\hat{y}}\-\mathbf{\bar{y}}\\\|^2\\]
the residual sum of squares (SSE)
\\\[\sum\_{i\=1}^n r\_i^2 \=\mathbf{r}^T\mathbf{r}\=\\\|\mathbf{r}\\\|^2\\]
and the unbiased estimator of the variance of the residuals, using the residual degrees of freedom, \\(n\-2\=48\\):
\\\[\\widehat{\\sigma\_{\\varepsilon}}^2 \=\\frac{SSE}{d.f.} \= \\frac{\\\|\\mathbf{r}\\\|^2}{48}\\]
Then \\(R^2\\):
\\\[R^2 \= \\frac{SSR}{SST}\\]
We can also compute the standard error of \\(\\widehat{\\boldsymbol\\beta}\\) since
\\\[\\begin{eqnarray\*}
\\widehat{\\boldsymbol\\beta} \&\=\& (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y}\\\\
var(\\widehat{\\boldsymbol\\beta})\&\=\&var((\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{y})\\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T var(\\mathbf{y}) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\&(\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T (\\widehat{\\sigma\_{\\varepsilon}}^2\) \\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2 (\\mathbf{X}^T\\mathbf{X})^{\-1}\\mathbf{X}^T\\mathbf{X}(\\mathbf{X}^T\\mathbf{X})^{\-1} \\\\
\&\=\& \\widehat{\\sigma\_{\\varepsilon}}^2(\\mathbf{X}^T\\mathbf{X})^{\-1}
\\end{eqnarray\*}\\]
The variances of each \\(\\widehat\\beta\\) are given by the diagonal elements of their covariance matrix (see Definition [6\.3](norms.html#def:covariancedef)), and the standard errors of each \\(\\widehat\\beta\\) are thus obtained by taking the square roots of these diagonal elements:
\\\[s.e.(\widehat{\beta\_i})\=\sqrt{\widehat{\sigma\_{\varepsilon}}^2\[(\mathbf{X}^T\mathbf{X})^{\-1}]\_{ii}}\\]
```
meany=mean(y)
XXinv=solve(t(X)%*%X)
yhat=X%*%XXinv%*%t(X)%*%y
resid=y-yhat
SStotal=norm(y-meany,type="2")^2
### OR SStotal=t(y-meany)%*%(y-meany)
SSreg=norm(yhat-meany,type="2")^2
### OR SSreg=t(yhat-meany)%*%(yhat-meany)
SSresid=norm(resid,type="2")^2
### OR SSresid=t(resid)%*%resid
Rsquared=SSreg/SStotal
StdErrorResiduals=norm(resid/sqrt(48), type="2") #=sqrt(SSresid/48)
CovBeta=SSresid*XXinv/48
StdErrorIntercept = sqrt(CovBeta[1,1])
StdErrorSlope = sqrt(CovBeta[2,2])
```
```
## [1] "Rsquared: 0.651079380758252"
```
```
## [1] "SSresid: 11353.5210510949"
```
```
## [1] "SSmodel: 21185.4589489052"
```
```
## [1] "StdErrorResiduals: 15.3795867488199"
```
```
## [1] "StdErrorIntercept: 6.75844016937923"
```
```
## [1] "StdErrorIntercept: 0.415512776657122"
```
Let’s plot our regression line over the original data:
```
plot(cars)
abline(beta[1],beta[2],col='blue')
```
### 11\.1\.4 OLS in R via `lm()`
Finally, let’s compare our results to the built in linear model solver, `lm()`:
```
fit = lm(dist ~ speed, data=cars)
summary(fit)
```
```
##
## Call:
## lm(formula = dist ~ speed, data = cars)
##
## Residuals:
## Min 1Q Median 3Q Max
## -29.069 -9.525 -2.272 9.215 43.201
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -17.5791 6.7584 -2.601 0.0123 *
## speed 3.9324 0.4155 9.464 1.49e-12 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.38 on 48 degrees of freedom
## Multiple R-squared: 0.6511, Adjusted R-squared: 0.6438
## F-statistic: 89.57 on 1 and 48 DF, p-value: 1.49e-12
```
```
anova(fit)
```
```
## Analysis of Variance Table
##
## Response: dist
## Df Sum Sq Mean Sq F value Pr(>F)
## speed 1 21186 21185.5 89.567 1.49e-12 ***
## Residuals 48 11354 236.5
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
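As a quick sanity check (a sketch reusing the `beta`, `CovBeta`, and `fit` objects created above), the hand\-computed estimates and standard errors line up with the `lm()` output:
```
# Coefficients: normal equations vs. lm()
cbind(manual = beta[, 1], lm = coef(fit))

# Standard errors: diagonal of our covariance matrix vs. summary(fit)
cbind(manual = sqrt(diag(CovBeta)),
      lm     = summary(fit)$coefficients[, "Std. Error"])
```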
11\.2 Multiple Linear Regression
--------------------------------
### 11\.2\.1 Bike Sharing Dataset
Chapter 13 Principal Components Analysis
========================================
We now have the tools necessary to discuss one of the most important concepts in mathematical statistics: **Principal Components Analysis (PCA)**. Before we dive into the mathematical details, we’ll first introduce an effective analogy to develop our intuition.
13\.1 God’s Flashlight
----------------------
Imagine your data as a multidimensional cloud of points in space. God (however you conceive that) has a flashlight and can project this data at a right angle down onto a flat surface \- the flashlight just casts a point shadow, the shadows don’t get bigger like on Earth. The center of the flat surface is fixed at the center of the data, so it’s more like 2 flashlights, one from below the surface, one from above, both at right angles. We could rotate this flashlight/flat surface setup around and get infinitely many projections of the data from different perspectives. The PCA projection is the *one* with the most variance, which indicates that it *contains the most information from your original data*. It’s also the projection that is *closest to the original data* in the Euclidean or sum\-of\-squared\-error sense (PCA gives the rank k approximation to your data with the lowest possible error). Once projected, the axes of the projection (drawn so that the “first” axis points in the direction of greatest variance) are your principal components, providing the orthogonal directions of maximal variance.
This projection we’ve just described is actually the projection of the data onto a *hyperplane*, which entails a rank reduction of 1, though you might have imagined it as a projection onto a 2\-dimensional plane. The great thing about PCA is that both of those visuals are appropriate \- we can project the data onto any dimensional subspace of the original from 1 to rank(\\(\\X\\))\-1\.
With this analogy in mind, we bring back the interactive plot from Chapter [9](orthog.html#orthog) to ponder what these different projections of a data cloud would look like, and to locate the maximum variance projection of *this* data.
13\.2 PCA Details
-----------------
PCA involves the analysis of eigenvalues and eigenvectors of the covariance or the correlation matrix. Its development relies on the following important facts:
**Theorem 13\.1 (Diagonalization of Symmetric Matrices)** All \\(n\\times n\\) real valued symmetric matrices (like the covariance and correlation matrix) have two very important properties:
1. They have a complete set of \\(n\\) linearly independent eigenvectors, \\(\\{\\v\_1,\\dots,\\v\_n\\}\\), corresponding to eigenvalues \\\[\\lambda\_1 \\geq \\lambda\_2 \\geq\\dots\\geq \\lambda\_n.\\]
2\. Furthermore, these eigenvectors can always be chosen to be *orthonormal* so that if \\(\V\=\[\v\_1\|\dots\|\v\_n]\\) then
\\\[\\V^{T}\\V\=\\bo{I}\\]
or equivalently, \\(\\V^{\-1}\=\\V^{T}\\).
Letting \\(\\D\\) be a diagonal matrix with \\(D\_{ii}\=\\lambda\_i\\), by the definition of eigenvalues and eigenvectors we have for any symmetric matrix \\(\\bo{S}\\),
\\\[\\bo{S}\\V\=\\V\\D\\]
Thus, any symmetric matrix \\(\\bo{S}\\) can be diagonalized in the following way:
\\\[\\V^{T}\\bo{S}\\V\=\\D\\]
Covariance and Correlation matrices (when there is no perfect multicollinearity in variables) have the additional property that all of their eigenvalues are positive (nonzero). They are *positive definite* matrices.
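Before moving on, here is a quick numerical check of Theorem 13\.1 (a sketch on an arbitrary small symmetric matrix, using only base R):
```
# Check the two properties: orthonormal eigenvectors and diagonalization
S = matrix(c(4, 2, 0,
             2, 3, 1,
             0, 1, 5), nrow = 3, byrow = TRUE)   # a symmetric matrix
e = eigen(S, symmetric = TRUE)
V = e$vectors
round(t(V) %*% V, 10)          # identity: the eigenvectors are orthonormal
round(t(V) %*% S %*% V, 10)    # diagonal matrix D with the eigenvalues on its diagonal
e$values                       # the eigenvalues, in decreasing order
```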
Now that we know we have a complete set of eigenvectors, it is common to order them according to the magnitude of their corresponding eigenvalues. From here on out, we will use \\((\\lambda\_1,\\v\_1\)\\) to represent the **largest** eigenvalue of a matrix and its corresponding eigenvector. When working with a covariance or correlation matrix, this eigenvector associated with the largest eigenvalue is called the **first principal component** and points in the direction for which the variance of the data is maximal. Example [13\.1](pca.html#exm:coveigs) illustrates this point.
**Example 13\.1 (Eigenvectors of the Covariance Matrix)** Suppose we have a matrix of data for 10 individuals on 2 variables, \\(\\x\_1\\) and \\(\\x\_2\\). Plotted on a plane, the data appears as follows:
Our data matrix for these points is:
\\\[\\X\=\\pm 1 \& 1\\\\2\&1\\\\2\&4\\\\3\&1\\\\4\&4\\\\5\&2\\\\6\&4\\\\6\&6\\\\7\&6\\\\8\&8 \\mp\\]
the means of the variables in \\(\\X\\) are:
\\\[\\bar{\\x}\=\\pm 4\.4 \\\\ 3\.7 \\mp. \\]
When thinking about variance directions, our first step should be to center the data so that it has mean zero. Eigenvectors measure the spread of data around the origin. Variance measures spread of data around the mean. Thus, we need to equate the mean with the origin. To center the data, we simply compute
\\\[\\X\_c\=\\X\-\\e\\bar{\\x}^T \= \\pm 1 \& 1\\\\2\&1\\\\2\&4\\\\3\&1\\\\4\&4\\\\5\&2\\\\6\&4\\\\6\&6\\\\7\&6\\\\8\&8 \\mp \- \\pm 4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\mp \= \\pm \-3\.4\&\-2\.7\\\\\-2\.4\&\-2\.7\\\\\-2\.4\& 0\.3\\\\\-1\.4\&\-2\.7\\\\ \-0\.4\& 0\.3\\\\0\.6\&\-1\.7\\\\1\.6\& 0\.3\\\\1\.6\& 2\.3\\\\2\.6\& 2\.3\\\\3\.6\& 4\.3\\mp.\\]
Examining the new centered data, we find that we’ve only translated our data in the plane \- we haven’t distorted it in any fashion.
Thus the covariance matrix is:
\\\[\\ssigma\=\\frac{1}{9}(\\X\_c^T\\X\_c)\= \\pm 5\.6 \& 4\.8\\\\4\.8\&6\.0111 \\mp \\]
The eigenvalue and eigenvector pairs of \\(\\ssigma\\) are (rounded to 2 decimal places) as follows:
\\\[(\\lambda\_1,\\v\_1\)\=\\left( 10\.6100 , \\begin{bmatrix} 0\.69 \\\\ 0\.72 \\end{bmatrix}\\right) \\mbox{ and } (\\lambda\_2,\\v\_2\)\= \\left( 1\.0012,\\begin{bmatrix}\-0\.72\\\\0\.69 \\end{bmatrix}\\right)\\]
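These numbers can be reproduced with a few lines of R (a sketch in which the small data matrix is simply typed in by hand):
```
# Reproduce Example 13.1: center the data, then eigendecompose its covariance matrix
X  = matrix(c(1,1, 2,1, 2,4, 3,1, 4,4, 5,2, 6,4, 6,6, 7,6, 8,8),
            ncol = 2, byrow = TRUE)
Xc = scale(X, center = TRUE, scale = FALSE)   # subtract the column means (4.4, 3.7)
S  = cov(X)                                   # same as t(Xc) %*% Xc / 9
S
eigen(S)                                      # eigenvalues approximately 10.61 and 1.00
```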
Let’s plot the eigenvector directions on the same graph:
The eigenvector \\(\\v\_1\\) is called the **first principal component**. It is the direction along which the variance of the data is maximal. The eigenvector \\(\\v\_2\\) is the **second principal component**. In general, the second principal component is the direction, orthogonal to the first, along which the variance of the data is maximal (in two dimensions, there is only one direction possible.)
Why is this important? Let’s consider what we’ve just done. We started with two variables, \\(\\x\_1\\) and \\(\\x\_2\\), which appeared to be correlated. We then derived *new variables*, \\(\\v\_1\\) and \\(\\v\_2\\), which are linear combinations of the original variables:
\\\[\\begin{eqnarray}
\\v\_1 \&\=\& 0\.69\\x\_1 \+ 0\.72\\x\_2 \\\\
\\tag{13\.1}
\\v\_2 \&\=\& \-0\.72\\x\_1 \+ 0\.69\\x\_2
\\end{eqnarray}\\]
These new variables are completely uncorrelated. To see this, let’s represent our data according to the new variables \- i.e. let’s change the basis from \\(\\mathcal{B}\_1\=\[\\x\_1,\\x\_2]\\) to \\(\\mathcal{B}\_2\=\[\\v\_1,\\v\_2]\\).
**Example 13\.2 (The Principal Component Basis)** Let’s express our data in the basis defined by the principal components. We want to find coordinates (in a \\(10\times 2\\) matrix \\(\A\\)) such that our original (centered) data can be expressed in terms of principal components. This is done by solving for \\(\A\\) in the following equation (see Chapter [8](basis.html#basis) and note that the *rows* of \\(\X\\) define the points rather than the columns):
\\\[\\begin{eqnarray}
\\X\_c \&\=\& \\A \\V^T \\\\
\\pm \-3\.4\&\-2\.7\\\\\-2\.4\&\-2\.7\\\\\-2\.4\& 0\.3\\\\\-1\.4\&\-2\.7\\\\ \-0\.4\& 0\.3\\\\0\.6\&\-1\.7\\\\1\.6\& 0\.3\\\\1\.6\& 2\.3\\\\2\.6\& 2\.3\\\\3\.6\& 4\.3 \\mp \&\=\& \\pm a\_{11} \& a\_{12} \\\\ a\_{21} \& a\_{22} \\\\ a\_{31} \& a\_{32}\\\\ a\_{41} \& a\_{42}\\\\ a\_{51} \& a\_{52}\\\\ a\_{61} \& a\_{62}\\\\ a\_{71} \& a\_{72}\\\\ a\_{81} \& a\_{82}\\\\ a\_{91} \& a\_{92}\\\\ a\_{10,1} \& a\_{10,2} \\mp \\pm \\v\_1^T \\\\ \\v\_2^T \\mp
\\end{eqnarray}\\]
Conveniently, our new basis is orthonormal meaning that \\(\\V\\) is an orthogonal matrix, so
\\\[\A\=\X\_c\V .\\]
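Continuing the earlier sketch (reusing `Xc` from the Example 13\.1 code), the new coordinates are a single matrix product:
```
# Coordinates of the centered data in the principal component basis
V = eigen(cov(Xc))$vectors    # columns are v1 and v2
A = Xc %*% V                  # one row of coordinates per observation
round(cov(A), 4)              # (near-)diagonal: the new variables are uncorrelated
```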
The new data coordinates reflect a simple rotation of the data around the origin:
Visually, we can see that the new variables are uncorrelated. You may wish to confirm this by calculating the covariance. In fact, we can do this in a general sense. If \\(\\A\=\\X\_c\\V\\) is our new data, then the covariance matrix is diagonal:
\\\[\\begin{eqnarray\*}
\\ssigma\_A \&\=\& \\frac{1}{n\-1}\\A^T\\A \\\\
\&\=\& \\frac{1}{n\-1}(\\X\_c\\V)^T(\\X\_c\\V) \\\\
\&\=\& \frac{1}{n\-1}\V^T(\X\_c^T\X\_c)\V\\\\
\&\=\&\\frac{1}{n\-1}\\V^T((n\-1\)\\ssigma\_X)\\V\\\\
\&\=\&\\V^T(\\ssigma\_X)\\V\\\\
\&\=\&\\V^T(\\V\\D\\V^T)\\V\\\\
\&\=\& \\D
\\end{eqnarray\*}\\]
Where \\(\\ssigma\_X\=\\V\\D\\V^T\\) comes from the diagonalization in Theorem [13\.1](pca.html#thm:eigsym).
By changing our variables to principal components, we have managed to **“hide”** the correlation between \\(\x\_1\\) and \\(\x\_2\\) while keeping the spatial relationships between data points intact. Transformation *back* to variables \\(\x\_1\\) and \\(\x\_2\\) is easily done using the linear relationships in Equation [(13\.1\)](pca.html#eq:pcacomb).
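The transformation back is just as short (a sketch reusing `A`, `V`, and `Xc` from above):
```
# Rotate back from principal component coordinates to the centered variables
Xc_back = A %*% t(V)
all.equal(Xc_back, Xc, check.attributes = FALSE)   # TRUE: no information was lost
```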
13\.3 Geometrical comparison with Least Squares
-----------------------------------------------
In least squares regression, our objective is to maximize the amount of variance explained in our target variable. It may look as though the first principal component from Example [13\.1](pca.html#exm:coveigs) points in the direction of the regression line. This is not the case however. The first principal component points in the direction of a line which minimizes the sum of squared *orthogonal* distances between the points and the line. Regressing \\(\\x\_2\\) on \\(\\x\_1\\), on the other hand, provides a line which minimizes the sum of squared *vertical* distances between points and the line. This is illustrated in Figure [13\.1](pca.html#fig:pcvsreg).
Figure 13\.1: Principal Components vs. Regression Lines
The first principal component about the mean of a set of points can be represented by the line which most closely approaches the data points. Let this not conjure up images of linear regression in your head, though: linear least squares minimizes distance in a single direction only (the direction of your target variable’s axis). Thus, although the two use a similar error metric, linear least squares treats one dimension of the data preferentially, while PCA treats all dimensions equally.
You might be tempted to conclude from Figure [13\.1](pca.html#fig:pcvsreg) that the first principal component and the regression line “ought to be similar.” This is a terrible conclusion if you consider a large multivariate dataset and the various regression lines that would predict each variable in that dataset. In PCA, there is no target variable and thus no single regression line that we’d be comparing to.
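To make the distinction concrete, here is a short sketch comparing the two slopes for the Example 13\.1 data (reusing the matrix `X` typed in earlier):
```
# Slope of the least squares line (regress x2 on x1): minimizes vertical distances
coef(lm(X[, 2] ~ X[, 1]))[2]

# Slope of the first principal component direction: minimizes orthogonal distances
v1 = eigen(cov(X))$vectors[, 1]
v1[2] / v1[1]
```
The two slopes differ because the two methods minimize different distances.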
13\.4 Covariance or Correlation Matrix?
---------------------------------------
Principal components analysis can involve eigenvectors of either the covariance matrix or the correlation matrix. When we perform this analysis on the covariance matrix, the geometric interpretation is simply centering the data and then determining the direction of maximal variance. When we perform this analysis on the correlation matrix, the interpretation is *standardizing* the data and then determining the direction of maximal variance. The correlation matrix is simply a scaled form of the covariance matrix. In general, these two methods give different results, especially when the scales of the variables are different.
The covariance matrix is the default for (most) \\(\\textsf{R}\\) PCA functions. The correlation matrix is the default in SAS and the covariance matrix method is invoked by the option:
```
proc princomp data=X cov;
var x1--x10;
run;
```
Choosing between the covariance and correlation matrix can sometimes pose problems. The rule of thumb is that the correlation matrix should be used when the scales of the variables vary greatly; otherwise, the variables with the largest variances will dominate the first principal component. The argument against automatically using correlation matrices is that it turns out to be quite a brutal way of standardizing your data \- forcing all variables to contain the same amount of information (after all, don’t we equate variance to information?) seems naive and counterintuitive when differences in scale do not demand it. We hope that the case studies outlined in Chapter [14](pcaapp.html#pcaapp) will give those who *always* use the correlation option reason for pause, and we hope that, in the future, they will consider multiple presentations of the data and their corresponding low\-rank representations.
13\.5 PCA in R
--------------
Let’s find Principal Components using the iris dataset. This is a well\-known dataset, often used to demonstrate the effect of clustering algorithms. It contains numeric measurements for 150 iris flowers along 4 dimensions. The fifth column in the dataset tells us what species of Iris the flower is. There are 3 species.
1. Sepal.Length
2. Sepal.Width
3. Petal.Length
4. Petal.Width
5. Species
* Setosa
* Versicolor
* Virginica
Let’s first take a look at the scatterplot matrix:
```
pairs(~Sepal.Length+Sepal.Width+Petal.Length+Petal.Width,data=iris,col=c("red","green3","blue")[iris$Species])
```
It is apparent that some of our variables are correlated. We can confirm this by computing the correlation matrix with the `cor()` function. We can also check out the individual variances of the variables and the covariances between variables by examining the covariance matrix (`cov()` function). Remember \- when looking at covariances, we can really only interpret the *sign* of the number and not the magnitude as we can with the correlations.
```
cor(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 1.0000000 -0.1175698 0.8717538 0.8179411
## Sepal.Width -0.1175698 1.0000000 -0.4284401 -0.3661259
## Petal.Length 0.8717538 -0.4284401 1.0000000 0.9628654
## Petal.Width 0.8179411 -0.3661259 0.9628654 1.0000000
```
```
cov(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 0.6856935 -0.0424340 1.2743154 0.5162707
## Sepal.Width -0.0424340 0.1899794 -0.3296564 -0.1216394
## Petal.Length 1.2743154 -0.3296564 3.1162779 1.2956094
## Petal.Width 0.5162707 -0.1216394 1.2956094 0.5810063
```
We have relatively strong positive correlation between Petal Length, Petal Width and Sepal Length. It is also clear that Petal Length has more than 3 times the variance of the other 3 variables. How will this affect our analysis?
The scatter plots and correlation matrix provide useful information, but they don’t give us a true sense for how the data looks when all 4 attributes are considered simultaneously.
In the next section we will compute the principal components directly from eigenvalues and eigenvectors of the covariance or correlation matrix. **It’s important to note that this method of *computing* principal components is not actually recommended \- the answer provided is the same, but the numerical stability and efficiency of this method may be dubious for large datasets. The Singular Value Decomposition (SVD), which will be discussed in Chapter [15](svd.html#svd), is generally a preferred route to computing principal components.** Using both the covariance matrix and the correlation matrix, let’s see what we can learn about the data. Let’s start with the covariance matrix which is the default setting for the `prcomp()` function in R.
### 13\.5\.1 Covariance PCA
Let’s start with the covariance matrix which is the default setting for the `prcomp` function in R. It’s worth repeating that a dedicated principal component function like `prcomp()` is superior in numerical stability and efficiency to the lines of code in the next section. **The only reason for directly computing the covariance matrix and its eigenvalues and eigenvectors (as opposed to `prcomp()`) is for edification. Computing a PCA in this manner, just this once, will help us grasp the exact mathematics of the situation and empower us to use built in functions with greater flexibility and understanding.**
### 13\.5\.2 Principal Components, Loadings, and Variance Explained
```
covM = cov(iris[1:4])
eig=eigen(covM,symmetric=TRUE,only.values=FALSE)
c=colnames(iris[1:4])
eig$values
```
```
## [1] 4.22824171 0.24267075 0.07820950 0.02383509
```
```
# Label the loadings
rownames(eig$vectors)=c(colnames(iris[1:4]))
# eig$vectors
```
The eigenvalues tell us how much of the total variance in the data is directed along each eigenvector. Thus, the amount of variance along \\(\\mathbf{v}\_1\\) is \\(\\lambda\_1\\) and the *proportion* of variance explained by the first principal component is
\\\[\\frac{\\lambda\_1}{\\lambda\_1\+\\lambda\_2\+\\lambda\_3\+\\lambda\_4}\\]
```
eig$values[1]/sum(eig$values)
```
```
## [1] 0.9246187
```
Thus 92% of the variation in the Iris data is explained by the first component alone. What if we consider the first and second principal component directions? Using this two dimensional representation (approximation/projection) we can capture the following proportion of variance:
\\\[\\frac{\\lambda\_1\+\\lambda\_2}{\\lambda\_1\+\\lambda\_2\+\\lambda\_3\+\\lambda\_4}\\]
```
sum(eig$values[1:2])/sum(eig$values)
```
```
## [1] 0.9776852
```
With two dimensions, we explain 97\.8% of the variance in these 4 variables! The entries in each eigenvector are called the **loadings** of the variables on the component. The loadings give us an idea of how important each variable is to each component. For example, it seems that the third variable in our dataset (Petal Length) is dominating the first principal component. This should not come as too much of a shock \- that variable had (by far) the largest amount of variation of the four. In order to capture as much variance as possible in a single dimension, we should certainly be considering this variable strongly. The variable with the next largest variance, Sepal Length, dominates the second principal component.
**Note:** *Had Petal Length and Sepal Length been correlated, they would not have dominated separate principal components, they would have shared one. These two variables are not correlated and thus their variation cannot be captured along the same direction.*
### 13\.5\.3 Scores and PCA Projection
Lets plot the *projection* of the four\-dimensional iris data onto the two dimensional space spanned by the first 2 principal components. To do this, we need coordinates. These coordinates are commonly called **scores** in statistical texts. We can find the coordinates of the data on the principal components by solving the system
\\\[\\mathbf{X}\=\\mathbf{A}\\mathbf{V}^T\\]
where \\(\\mathbf{X}\\) is our original iris data **(centered to have mean \= 0\)** and \\(\\mathbf{A}\\) is a matrix of coordinates in the new principal component space, spanned by the eigenvectors in \\(\\mathbf{V}\\).
Solving this system is simple enough \- since \\(\\mathbf{V}\\) is an orthogonal matrix per Theorem [13\.1](pca.html#thm:eigsym). Let’s confirm this:
```
eig$vectors %*% t(eig$vectors)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 1.000000e+00 4.163336e-17 -2.775558e-17 -2.775558e-17
## Sepal.Width 4.163336e-17 1.000000e+00 1.665335e-16 1.942890e-16
## Petal.Length -2.775558e-17 1.665335e-16 1.000000e+00 -2.220446e-16
## Petal.Width -2.775558e-17 1.942890e-16 -2.220446e-16 1.000000e+00
```
```
t(eig$vectors) %*% eig$vectors
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1.000000e+00 -2.289835e-16 0.000000e+00 -1.110223e-16
## [2,] -2.289835e-16 1.000000e+00 2.775558e-17 -1.318390e-16
## [3,] 0.000000e+00 2.775558e-17 1.000000e+00 1.110223e-16
## [4,] -1.110223e-16 -1.318390e-16 1.110223e-16 1.000000e+00
```
We’ll have to settle for precision at 15 decimal places. Close enough!
To find the scores, we simply subtract the means from our original variables to create the data matrix \\(\\mathbf{X}\\) and compute
\\\[\\mathbf{A}\=\\mathbf{X}\\mathbf{V}\\]
```
# The scale function centers and scales by default
X=scale(iris[1:4],center=TRUE,scale=FALSE)
# Create data.frame from matrix for plotting purposes.
scores=data.frame(X %*% eig$vectors)
# Change default variable names
colnames(scores)=c("Prin1","Prin2","Prin3","Prin4")
# Print coordinates/scores of first 10 observations
scores[1:10, ]
```
```
## Prin1 Prin2 Prin3 Prin4
## 1 -2.684126 -0.31939725 -0.02791483 0.002262437
## 2 -2.714142 0.17700123 -0.21046427 0.099026550
## 3 -2.888991 0.14494943 0.01790026 0.019968390
## 4 -2.745343 0.31829898 0.03155937 -0.075575817
## 5 -2.728717 -0.32675451 0.09007924 -0.061258593
## 6 -2.280860 -0.74133045 0.16867766 -0.024200858
## 7 -2.820538 0.08946138 0.25789216 -0.048143106
## 8 -2.626145 -0.16338496 -0.02187932 -0.045297871
## 9 -2.886383 0.57831175 0.02075957 -0.026744736
## 10 -2.672756 0.11377425 -0.19763272 -0.056295401
```
To this point, we have simply computed coordinates (scores) on a new set of axes (formed by the principal components, i.e. the eigenvectors). These axes are orthogonal and are aligned with the directions of maximal variance in the data. When we consider only a subset of principal components (like the two components here that account for 97\.8% of the variance), we are projecting the data onto a lower dimensional space. Generally, this is one of the primary goals of PCA: Project the data down into a lower dimensional space (*onto the span of the principal components*) while keeping the maximum amount of information (i.e. variance).
Thus, we know that almost 98% of the data’s variance can be seen in two\-dimensions using the first two principal components. Let’s go ahead and see what this looks like:
```
plot(scores$Prin1, scores$Prin2,
main="Data Projected on First 2 Principal Components",
xlab="First Principal Component",
ylab="Second Principal Component",
col=c("red","green3","blue")[iris$Species])
```
### 13\.5\.4 PCA functions in R
```
irispca=prcomp(iris[1:4])
# Variance Explained
summary(irispca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 2.0563 0.49262 0.2797 0.15439
## Proportion of Variance 0.9246 0.05307 0.0171 0.00521
## Cumulative Proportion 0.9246 0.97769 0.9948 1.00000
```
```
# Eigenvectors:
irispca$rotation
```
```
## PC1 PC2 PC3 PC4
## Sepal.Length 0.36138659 -0.65658877 0.58202985 0.3154872
## Sepal.Width -0.08452251 -0.73016143 -0.59791083 -0.3197231
## Petal.Length 0.85667061 0.17337266 -0.07623608 -0.4798390
## Petal.Width 0.35828920 0.07548102 -0.54583143 0.7536574
```
```
# Coordinates of first 10 observations along PCs:
irispca$x[1:10, ]
```
```
## PC1 PC2 PC3 PC4
## [1,] -2.684126 -0.31939725 0.02791483 0.002262437
## [2,] -2.714142 0.17700123 0.21046427 0.099026550
## [3,] -2.888991 0.14494943 -0.01790026 0.019968390
## [4,] -2.745343 0.31829898 -0.03155937 -0.075575817
## [5,] -2.728717 -0.32675451 -0.09007924 -0.061258593
## [6,] -2.280860 -0.74133045 -0.16867766 -0.024200858
## [7,] -2.820538 0.08946138 -0.25789216 -0.048143106
## [8,] -2.626145 -0.16338496 0.02187932 -0.045297871
## [9,] -2.886383 0.57831175 -0.02075957 -0.026744736
## [10,] -2.672756 0.11377425 0.19763272 -0.056295401
```
All of the information we computed using eigenvectors aligns with what we see here, except that the coordinates/scores and the loadings of Principal Component 3 are of the opposite sign. In light of what we know about eigenvectors representing *directions*, this should be no cause for alarm. The `prcomp` function arrived at the unit basis vector pointing in the negative direction of the one we found directly from the `eig` function \- which should negate all the coordinates and leave us with an equivalent mirror image in all of our projections.
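Here is a one\-line check of that sign flip (a sketch reusing the `irispca` and `eig` objects from above):
```
# The sign of an eigenvector is arbitrary: flipping PC3 reconciles the two results
all.equal(unname(irispca$rotation[, 3]), -unname(eig$vectors[, 3]))
```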
### 13\.5\.5 The Biplot
One additional feature that R users have created is the **biplot**. The PCA biplot allows us to see where our original variables fall in the space of the principal components. Highly correlated variables will fall along the same direction (or exactly opposite directions) as a change in one of these variables correlates to a change in the other. Uncorrelated variables will appear further apart. The length of the variable vectors on the biplot tells us the degree to which variability in that variable is explained in that direction. Shorter vectors have less variability than longer vectors. So in the biplot below, petal width and petal length point in the same direction indicating that these variables share a relatively high degree of correlation. However, the vector for petal width is much shorter than that of petal length, which means you can expect a higher degree of change in petal length as you proceed to the right along PC1\. PC1 explains more of the variance in petal length than it does in petal width. If we were to imagine a third PC orthogonal to the plane shown, petal width is likely to sit at a much larger angle off the plane \- here, it is being projected down from that 3\-dimensional picture.
```
biplot(irispca, col = c("gray", "blue"))
```
We can examine some of the outlying observations to see how they align with these projected variable directions. It helps to compare them to the quartiles of the data. Also keep in mind the direction of the arrows in the plot. If the arrow points down then the positive direction is down \- indicating observations which are greater than the mean. Let’s pick out observations 42 and 132 and see what the actual data points look like in comparison to the rest of the sample population.
```
summary(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
## 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
## Median :5.800 Median :3.000 Median :4.350 Median :1.300
## Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
## 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
## Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
```
```
# Consider orientation of outlying observations:
iris[42, ]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 42 4.5 2.3 1.3 0.3 setosa
```
```
iris[132, ]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 132 7.9 3.8 6.4 2 virginica
```
13\.6 Variable Clustering with PCA
----------------------------------
The direction arrows on the biplot are merely the coefficients of the original variables when combined to make principal components. Don’t forget that principal components are simply linear combinations of the original variables.
For example, here we have the first principal component (the first column of \\(\\V\\)), \\(\\mathbf{v}\_1\\) as:
```
eig$vectors[,1]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## 0.36138659 -0.08452251 0.85667061 0.35828920
```
This means that the **coordinates of the data along** the first principal component, which we’ll denote here as \\(PC\_1\\) are given by a simple linear combination of our original variables after centering (for covariance PCA) or standardization (for correlation PCA)
\\\[PC\_1 \= 0\.36Sepal.Length\-0\.08Sepal.Width\+0\.86Petal.Length \+0\.36Petal.Width\\]
The same equation could be written for each of the vectors of coordinates along the principal components, \\(PC\_1,\dots, PC\_4\\).
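As a quick check (a sketch reusing the centered matrix `X`, the eigenvectors in `eig$vectors`, and the `scores` data frame from Section 13\.5\.3), the first column of scores really is this linear combination:
```
# First principal component scores computed directly from the linear combination
pc1_manual = X %*% eig$vectors[, 1]
all.equal(as.numeric(pc1_manual), scores$Prin1)   # TRUE
```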
Essentially, we have a system of equations telling us that the rows of \\(\\V^T\\) (i.e. the columns of \\(\\V\\)) give us the weights of each variable for each principal component:
\\\[\\begin{equation}
\\tag{13\.2}
\\begin{bmatrix} PC\_1\\\\PC\_2\\\\PC\_3\\\\PC\_4\\end{bmatrix} \= \\mathbf{V}^T\\begin{bmatrix}Sepal.Length\\\\Sepal.Width\\\\Petal.Length\\\\Petal.Width\\end{bmatrix}
\\end{equation}\\]
Thus, if we want the coordinates of our original variables in terms of Principal Components (so that we can plot them as we do in the biplot) we need to look no further than the rows of the matrix \\(\mathbf{V}\\) as
\\\[\\begin{equation}
\\tag{13\.3}
\\begin{bmatrix}Sepal.Length\\\\Sepal.Width\\\\Petal.Length\\\\Petal.Width\\end{bmatrix} \=\\mathbf{V}\\begin{bmatrix} PC\_1\\\\PC\_2\\\\PC\_3\\\\PC\_4\\end{bmatrix}
\\end{equation}\\]
means that the rows of \\(\\mathbf{V}\\) give us the coordinates of our original variables in the PCA space. The transition from Equation [(13\.2\)](pca.html#eq:cpc1) to Equation [(13\.3\)](pca.html#eq:cpc2) is provided by the orthogonality of the eigenvectors per Theorem [13\.1](pca.html#thm:eigsym).
```
#First entry in each eigenvectors give coefficients for Variable 1:
eig$vectors[1,]
```
```
## [1] 0.3613866 -0.6565888 -0.5820299 0.3154872
```
\\\[Sepal.Length \= 0\.361 PC\_1 \- 0\.657 PC\_2 \- 0\.582 PC\_3 \+ 0\.315 PC\_4\\]
You can see this on the biplot. The vector shown for Sepal.Length is (0\.361, \-0\.656\), which is the two dimensional projection formed by throwing out components 3 and 4\.
Variables which lie upon similar directions in the PCA space tend to change together in a similar fashion. We might consider Petal.Width and Petal.Length as a cluster of variables because they share a direction on the biplot, which means they represent much of the same information (the underlying construct being the “size of the petal” in this case).
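The arrow coordinates that the biplot draws for each variable are just the first two entries of the corresponding row of \\(\mathbf{V}\\); a quick sketch using the `irispca` object from above pulls them out along with their lengths:
```
# 2-D coordinates of each variable in the (PC1, PC2) plane, and their vector lengths
proj2d = irispca$rotation[, 1:2]
proj2d
sqrt(rowSums(proj2d^2))   # short arrows are poorly represented by this plane
```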
### 13\.6\.1 Correlation PCA
We can complete the same analysis using the correlation matrix. I’ll leave it as an exercise to compute the Principal Component loadings and scores and variance explained directly from eigenvectors and eigenvalues. You should do this and compare your results to the R output. *(Beware: you must transform your data before solving for the scores. With the covariance version, this meant centering \- for the correlation version, this means standardization as well)*
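Here is a minimal sketch of that exercise using only base R (standardize the data, eigendecompose the correlation matrix, then compute the scores):
```
# Correlation PCA "by hand"
corM    = cor(iris[1:4])
eigCor  = eigen(corM, symmetric = TRUE)
Z       = scale(iris[1:4], center = TRUE, scale = TRUE)   # standardized data
scoresC = Z %*% eigCor$vectors                            # scores on the correlation PCs
eigCor$values / sum(eigCor$values)                        # proportion of variance explained
```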
```
# NOTE: 'cor' is a princomp() argument; prcomp() disregards it with a warning
# (shown below), so this call actually repeats the covariance PCA. The prcomp()
# way to request correlation (standardized) PCA is scale.=TRUE, e.g.
#   irispca2=prcomp(iris[1:4], scale.=TRUE)
irispca2=prcomp(iris[1:4], cor=TRUE)
```
```
## Warning: In prcomp.default(iris[1:4], cor = TRUE) :
## extra argument 'cor' will be disregarded
```
```
summary(irispca2)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 2.0563 0.49262 0.2797 0.15439
## Proportion of Variance 0.9246 0.05307 0.0171 0.00521
## Cumulative Proportion 0.9246 0.97769 0.9948 1.00000
```
```
irispca2$rotation
```
```
## PC1 PC2 PC3 PC4
## Sepal.Length 0.36138659 -0.65658877 0.58202985 0.3154872
## Sepal.Width -0.08452251 -0.73016143 -0.59791083 -0.3197231
## Petal.Length 0.85667061 0.17337266 -0.07623608 -0.4798390
## Petal.Width 0.35828920 0.07548102 -0.54583143 0.7536574
```
```
irispca2$x[1:10,]
```
```
## PC1 PC2 PC3 PC4
## [1,] -2.684126 -0.31939725 0.02791483 0.002262437
## [2,] -2.714142 0.17700123 0.21046427 0.099026550
## [3,] -2.888991 0.14494943 -0.01790026 0.019968390
## [4,] -2.745343 0.31829898 -0.03155937 -0.075575817
## [5,] -2.728717 -0.32675451 -0.09007924 -0.061258593
## [6,] -2.280860 -0.74133045 -0.16867766 -0.024200858
## [7,] -2.820538 0.08946138 -0.25789216 -0.048143106
## [8,] -2.626145 -0.16338496 0.02187932 -0.045297871
## [9,] -2.886383 0.57831175 -0.02075957 -0.026744736
## [10,] -2.672756 0.11377425 0.19763272 -0.056295401
```
```
plot(irispca2$x[,1],irispca2$x[,2],
main="Data Projected on First 2 Principal Components",
xlab="First Principal Component",
ylab="Second Principal Component",
col=c("red","green3","blue")[iris$Species])
```
```
biplot(irispca2)
```
Here you can see the direction vectors of the original variables are relatively uniform in length in the PCA space. This is due to the standardization in the correlation matrix. However, the general message is the same: Petal.Width and Petal.Length Cluster together, and many of the same observations appear “on the fray” on the PCA space \- although not all of them!
### 13\.6\.2 Which Projection is Better?
What do you think? It depends on the task, and it depends on the data. One flavor of PCA is not “better” than the other. Correlation PCA is appropriate when the scales of your attributes differ wildly, and covariance PCA would be inappropriate in that situation. But in all other scenarios, when the scales of our attributes are roughly the same, we should always consider both dimension reductions and make a decision based upon the resulting output (variance explained, projection plots, loadings).
For the iris data, the results in terms of variable clustering are pretty much the same. For clustering/classifying the 3 species of flowers, we can see better separation in the covariance version.
### 13\.6\.3 Beware of biplots
Be careful not to draw improper conclusions from biplots. Particularly, be careful about situations where the first two principal components do not summarize the majority of the variance. If a large amount of variance is captured by the 3rd or 4th (or higher) principal components, then we must keep in mind that the variable projections on the first two principal components are flattened out versions of a higher dimensional picture. If a variable vector appears short in the 2\-dimensional projection, it means one of two things:
* That variable has small variance
* That variable appears to have small variance when depicted in the space of the first two principal components, but truly has a larger variance which is represented by 3rd or higher principal components.
Let’s take a look at an example of this. We’ll generate 500 rows of data on 4 nearly independent normal random variables. Since these variables are uncorrelated, we might expect that the 4 orthogonal principal components will line up relatively close to the original variables. If this doesn’t happen, then at the very least we can expect the biplot to show little to no correlation between the variables. We’ll give variables \\(2\\) and \\(3\\) the largest variance. Multiple runs of this code will generate different results with similar implications.
```
means=c(2,4,1,3)
sigmas=c(7,9,10,8)
sample.size=500
# Draw sample.size observations for each (mean, sd) pair
data=mapply(function(mu,sig){rnorm(n=sample.size, mean=mu, sd=sig)},mu=means,sig=sigmas)
cor(data)
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1.00000000 -0.00301237 0.053142703 0.123924125
## [2,] -0.00301237 1.00000000 -0.023528385 -0.029730772
## [3,] 0.05314270 -0.02352838 1.000000000 -0.009356552
## [4,] 0.12392412 -0.02973077 -0.009356552 1.000000000
```
```
pc=prcomp(data,scale=TRUE)
summary(pc)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 1.0664 1.0066 0.9961 0.9259
## Proportion of Variance 0.2843 0.2533 0.2480 0.2143
## Cumulative Proportion 0.2843 0.5376 0.7857 1.0000
```
```
pc$rotation
```
```
## PC1 PC2 PC3 PC4
## [1,] -0.6888797 -0.1088649 0.2437045 0.6739446
## [2,] 0.1995146 -0.5361096 0.8018612 -0.1726242
## [3,] -0.2568194 0.7601069 0.5028666 -0.3215688
## [4,] -0.6478290 -0.3506744 -0.2115465 -0.6423341
```
```
biplot(pc)
```
Figure 13\.2: Biplot of the Simulated Data
Obviously, the wrong conclusion to make from this biplot is that Variables 1 and 4 are correlated. Variables 1 and 4 do not load highly on the first two principal components \- in the *whole* 4\-dimensional principal component space they are nearly orthogonal to each other and to variables 2 and 3\. Thus, their orthogonal projections appear near the origin of this 2\-dimensional subspace.
The morals of the story:
* Always corroborate your results using the variable loadings and the amount of variation explained by each variable.
* When a variable shows up near the origin in a biplot, it is generally not well represented by your two\-dimensional approximation of the data.
13\.1 God’s Flashlight
----------------------
Imagine your data as a multidimensional cloud of points in space. God (however you conceive that) has a flashlight and can project this data at a right angle down onto a flat surface \- the flashlight just casts a point shadow, the shadows don’t get bigger like on Earth. The center of the flat surface is fixed at the center of the data, so it’s more like 2 flashlights, one from below the surface, one from above, both at right angles. We could rotate this flashlight/flat surface setup around and get infinitely many projections of the data from different perspectives. The PCA projection is the *one* with the most variance, which indicates that it *contains the most information from your original data*. It’s also the projection that is *closest to the original data* in the Euclidean or sum\-of\-squared\-error sense (PCA gives the rank k approximation to your data with the lowest possible error). Once projected, the axes of the projection (drawn so that the “first” axis points in the direction of greatest variance) are your principal components, providing the orthogonal directions of maximal variance.
This projection we’ve just described is actually the projection of the data onto a *hyperplane*, which entails a rank reduction of 1, though you might have imagined it as a projection onto a 2\-dimensional plane. The great thing about PCA is that both of those visuals are appropriate \- we can project the data onto any dimensional subspace of the original from 1 to rank(\\(\\X\\))\-1\.
With this analogy in mind, we bring back to the interactive plot from Chapter [9](orthog.html#orthog) to ponder what these different projections of a data cloud would look like, and to locate the maximum variance projection of *this* data.
13\.2 PCA Details
-----------------
PCA involves the analysis of eigenvalues and eigenvectors of the covariance or the correlation matrix. Its development relies on the following important facts:
**Theorem 13\.1 (Diagonalization of Symmetric Matrices)** All \\(n\\times n\\) real valued symmetric matrices (like the covariance and correlation matrix) have two very important properties:
1. They have a complete set of \\(n\\) linearly independent eigenvectors, \\(\\{\\v\_1,\\dots,\\v\_n\\}\\), corresponding to eigenvalues \\\[\\lambda\_1 \\geq \\lambda\_2 \\geq\\dots\\geq \\lambda\_n.\\]
2. Furthermore, these eigenvectors can be always be chosen to be *orthonormal* so that if \\(\\V\=\[\\v\_1\|\\dots\|\\v\_n]\\) then
\\\[\\V^{T}\\V\=\\bo{I}\\]
or equivalently, \\(\\V^{\-1}\=\\V^{T}\\).
Letting \\(\\D\\) be a diagonal matrix with \\(D\_{ii}\=\\lambda\_i\\), by the definition of eigenvalues and eigenvectors we have for any symmetric matrix \\(\\bo{S}\\),
\\\[\\bo{S}\\V\=\\V\\D\\]
Thus, any symmetric matrix \\(\\bo{S}\\) can be diagonalized in the following way:
\\\[\\V^{T}\\bo{S}\\V\=\\D\\]
Covariance and Correlation matrices (when there is no perfect multicollinearity in variables) have the additional property that all of their eigenvalues are positive (nonzero). They are *positive definite* matrices.
Now that we know we have a complete set of eigenvectors, it is common to order them according to the magnitude of their corresponding eigenvalues. From here on out, we will use \\((\\lambda\_1,\\v\_1\)\\) to represent the **largest** eigenvalue of a matrix and its corresponding eigenvector. When working with a covariance or correlation matrix, this eigenvector associated with the largest eigenvalue is called the **first principal component** and points in the direction for which the variance of the data is maximal. Example [13\.1](pca.html#exm:coveigs) illustrates this point.
**Example 13\.1 (Eigenvectors of the Covariance Matrix)** Suppose we have a matrix of data for 10 individuals on 2 variables, \\(\\x\_1\\) and \\(\\x\_2\\). Plotted on a plane, the data appears as follows:
Our data matrix for these points is:
\\\[\\X\=\\pm 1 \& 1\\\\2\&1\\\\2\&4\\\\3\&1\\\\4\&4\\\\5\&2\\\\6\&4\\\\6\&6\\\\7\&6\\\\8\&8 \\mp\\]
the means of the variables in \\(\\X\\) are:
\\\[\\bar{\\x}\=\\pm 4\.4 \\\\ 3\.7 \\mp. \\]
When thinking about variance directions, our first step should be to center the data so that it has mean zero. Eigenvectors measure the spread of data around the origin. Variance measures spread of data around the mean. Thus, we need to equate the mean with the origin. To center the data, we simply compute
\\\[\\X\_c\=\\X\-\\e\\bar{\\x}^T \= \\pm 1 \& 1\\\\2\&1\\\\2\&4\\\\3\&1\\\\4\&4\\\\5\&2\\\\6\&4\\\\6\&6\\\\7\&6\\\\8\&8 \\mp \- \\pm 4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\\\4\.4 \& 3\.7 \\mp \= \\pm \-3\.4\&\-2\.7\\\\\-2\.4\&\-2\.7\\\\\-2\.4\& 0\.3\\\\\-1\.4\&\-2\.7\\\\ \-0\.4\& 0\.3\\\\0\.6\&\-1\.7\\\\1\.6\& 0\.3\\\\1\.6\& 2\.3\\\\2\.6\& 2\.3\\\\3\.6\& 4\.3\\mp.\\]
Examining the new centered data, we find that we’ve only translated our data in the plane \- we haven’t distorted it in any fashion.
Thus the covariance matrix is:
\\\[\\ssigma\=\\frac{1}{9}(\\X\_c^T\\X\_c)\= \\pm 5\.6 \& 4\.8\\\\4\.8\&6\.0111 \\mp \\]
The eigenvalue and eigenvector pairs of \\(\\ssigma\\) are (rounded to 2 decimal places) as follows:
\\\[(\\lambda\_1,\\v\_1\)\=\\left( 10\.6100 , \\begin{bmatrix} 0\.69 \\\\ 0\.72 \\end{bmatrix}\\right) \\mbox{ and } (\\lambda\_2,\\v\_2\)\= \\left( 1\.0012,\\begin{bmatrix}\-0\.72\\\\0\.69 \\end{bmatrix}\\right)\\]
Let’s plot the eigenvector directions on the same graph:
The eigenvector \\(\\v\_1\\) is called the **first principal component**. It is the direction along which the variance of the data is maximal. The eigenvector \\(\\v\_2\\) is the **second principal component**. In general, the second principal component is the direction, orthogonal to the first, along which the variance of the data is maximal (in two dimensions, there is only one direction possible.)
Why is this important? Let’s consider what we’ve just done. We started with two variables, \\(\\x\_1\\) and \\(\\x\_2\\), which appeared to be correlated. We then derived *new variables*, \\(\\v\_1\\) and \\(\\v\_2\\), which are linear combinations of the original variables:
\\\[\\begin{eqnarray}
\\v\_1 \&\=\& 0\.69\\x\_1 \+ 0\.72\\x\_2 \\\\
\\tag{13\.1}
\\v\_2 \&\=\& \-0\.72\\x\_1 \+ 0\.69\\x\_2
\\end{eqnarray}\\]
These new variables are completely uncorrelated. To see this, let’s represent our data according to the new variables \- i.e. let’s change the basis from \\(\\mathcal{B}\_1\=\[\\x\_1,\\x\_2]\\) to \\(\\mathcal{B}\_2\=\[\\v\_1,\\v\_2]\\).
**Example 13\.2 (The Principal Component Basis)** Let’s express our data in the basis defined by the principal components. We want to find coordinates (in a \\(2\\times 10\\) matrix \\(\\A\\)) such that our original (centered) data can be expressed in terms of principal components. This is done by solving for \\(\\A\\) in the following equation (see Chapter [8](basis.html#basis) and note that the *rows* of \\(\\X\\) define the points rather than the columns):
\\\[\\begin{eqnarray}
\\X\_c \&\=\& \\A \\V^T \\\\
\\pm \-3\.4\&\-2\.7\\\\\-2\.4\&\-2\.7\\\\\-2\.4\& 0\.3\\\\\-1\.4\&\-2\.7\\\\ \-0\.4\& 0\.3\\\\0\.6\&\-1\.7\\\\1\.6\& 0\.3\\\\1\.6\& 2\.3\\\\2\.6\& 2\.3\\\\3\.6\& 4\.3 \\mp \&\=\& \\pm a\_{11} \& a\_{12} \\\\ a\_{21} \& a\_{22} \\\\ a\_{31} \& a\_{32}\\\\ a\_{41} \& a\_{42}\\\\ a\_{51} \& a\_{52}\\\\ a\_{61} \& a\_{62}\\\\ a\_{71} \& a\_{72}\\\\ a\_{81} \& a\_{82}\\\\ a\_{91} \& a\_{92}\\\\ a\_{10,1} \& a\_{10,2} \\mp \\pm \\v\_1^T \\\\ \\v\_2^T \\mp
\\end{eqnarray}\\]
Conveniently, our new basis is orthonormal meaning that \\(\\V\\) is an orthogonal matrix, so
\\\[\\A\=\\X\\V .\\]
The new data coordinates reflect a simple rotation of the data around the origin:
Visually, we can see that the new variables are uncorrelated. You may wish to confirm this by calculating the covariance. In fact, we can do this in a general sense. If \\(\\A\=\\X\_c\\V\\) is our new data, then the covariance matrix is diagonal:
\\\[\\begin{eqnarray\*}
\\ssigma\_A \&\=\& \\frac{1}{n\-1}\\A^T\\A \\\\
\&\=\& \\frac{1}{n\-1}(\\X\_c\\V)^T(\\X\_c\\V) \\\\
\&\=\& \\frac{1}{n\-1}\\V^T((\\X\_c^T\\X\_c)\\V\\\\
\&\=\&\\frac{1}{n\-1}\\V^T((n\-1\)\\ssigma\_X)\\V\\\\
\&\=\&\\V^T(\\ssigma\_X)\\V\\\\
\&\=\&\\V^T(\\V\\D\\V^T)\\V\\\\
\&\=\& \\D
\\end{eqnarray\*}\\]
Where \\(\\ssigma\_X\=\\V\\D\\V^T\\) comes from the diagonalization in Theorem [13\.1](pca.html#thm:eigsym).
By changing our variables to principal components, we have managed to **“hide”** the correlation between \\(\\x\_1\\) and \\(\\x\_2\\) while keeping the spacial relationships between data points in tact. Transformation *back* to variables \\(\\x\_1\\) and \\(\\x\_2\\) is easily done by using the linear relationships in from Equation [(13\.1\)](pca.html#eq:pcacomb).
13\.3 Geometrical comparison with Least Squares
-----------------------------------------------
In least squares regression, our objective is to maximize the amount of variance explained in our target variable. It may look as though the first principal component from Example [13\.1](pca.html#exm:coveigs) points in the direction of the regression line. This is not the case however. The first principal component points in the direction of a line which minimizes the sum of squared *orthogonal* distances between the points and the line. Regressing \\(\\x\_2\\) on \\(\\x\_1\\), on the other hand, provides a line which minimizes the sum of squared *vertical* distances between points and the line. This is illustrated in Figure [13\.1](pca.html#fig:pcvsreg).
Figure 13\.1: Principal Components vs. Regression Lines
The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points. Let this not conjure up images of linear regression in your head, though. In contrast, linear least squares tries to minimize the distance in a single direction only (the direction of your target variable axes). Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally.
You might be tempted to conclude from Figure [13\.1](pca.html#fig:pcvsreg) that the first principal component and the regression line “ought to be similar.” This is a terrible conclusion if you consider a large multivariate dataset and the various regression lines that would predict each variable in that dataset. In PCA, there is no target variable and thus no single regression line that we’d be comparing to.
13\.4 Covariance or Correlation Matrix?
---------------------------------------
Principal components analysis can involve eigenvectors of either the covariance matrix or the correlation matrix. When we perform this analysis on the covariance matrix, the geometric interpretation is simply centering the data and then determining the direction of maximal variance. When we perform this analysis on the correlation matrix, the interpretation is *standardizing* the data and then determining the direction of maximal variance. The correlation matrix is simply a scaled form of the covariance matrix. In general, these two methods give different results, especially when the scales of the variables are different.
The covariance matrix is the default for (most) \\(\\textsf{R}\\) PCA functions. The correlation matrix is the default in SAS and the covariance matrix method is invoked by the option:
```
proc princomp data=X cov;
var x1--x10;
run;
```
Choosing between the covariance and correlation matrix can sometimes pose problems. The rule of thumb is that the correlation matrix should be used when the scales of the variables vary greatly. In this case, the variables with the highest variance will dominate the first principal component. The argument against automatically using correlation matrices is that it turns out to be quite a brutal way of standardizing your data \- forcing all variables to contain the same amount of information (after all, don’t we equate variance to information?) seems naive and counterintuitive when it is not absolutely necessary for differences in scale. We hope that the case studies outlined in Chapter [14](pcaapp.html#pcaapp) will give those who *always* use the correlation option reason for pause, and we hope that, in the future, they will consider multiple presentations of the data and their corresponding low\-rank representations of the data.
13\.5 PCA in R
--------------
Let’s find Principal Components using the iris dataset. This is a well\-known dataset, often used to demonstrate the effect of clustering algorithms. It contains numeric measurements for 150 iris flowers along 4 dimensions. The fifth column in the dataset tells us what species of Iris the flower is. There are 3 species.
1. Sepal.Length
2. Sepal.Width
3. Petal.Length
4. Petal.Width
5. Species
* Setosa
* Versicolor
* Virginica
Let’s first take a look at the scatterplot matrix:
```
pairs(~Sepal.Length+Sepal.Width+Petal.Length+Petal.Width,data=iris,col=c("red","green3","blue")[iris$Species])
```
It is apparent that some of our variables are correlated. We can confirm this by computing the correlation matrix with the `cor()` function. We can also check out the individual variances of the variables and the covariances between variables by examining the covariance matrix (`cov()` function). Remember \- when looking at covariances, we can really only interpret the *sign* of the number and not the magnitude as we can with the correlations.
```
cor(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 1.0000000 -0.1175698 0.8717538 0.8179411
## Sepal.Width -0.1175698 1.0000000 -0.4284401 -0.3661259
## Petal.Length 0.8717538 -0.4284401 1.0000000 0.9628654
## Petal.Width 0.8179411 -0.3661259 0.9628654 1.0000000
```
```
cov(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 0.6856935 -0.0424340 1.2743154 0.5162707
## Sepal.Width -0.0424340 0.1899794 -0.3296564 -0.1216394
## Petal.Length 1.2743154 -0.3296564 3.1162779 1.2956094
## Petal.Width 0.5162707 -0.1216394 1.2956094 0.5810063
```
We have relatively strong positive correlation between Petal Length, Petal Width and Sepal Length. It is also clear that Petal Length has more than 3 times the variance of the other 3 variables. How will this effect our analysis?
The scatter plots and correlation matrix provide useful information, but they don’t give us a true sense for how the data looks when all 4 attributes are considered simultaneously.
In the next section we will compute the principal components directly from eigenvalues and eigenvectors of the covariance or correlation matrix. **It’s important to note that this method of *computing* principal components is not actually recommended \- the answer provided is the same, but the numerical stability and efficiency of this method may be dubious for large datasets. The Singular Value Decomposition (SVD), which will be discussed in Chapter [15](svd.html#svd), is generally a preferred route to computing principal components.** Using both the covariance matrix and the correlation matrix, let’s see what we can learn about the data. Let’s start with the covariance matrix which is the default setting for the `prcomp()` function in R.
### 13\.5\.1 Covariance PCA
Let’s start with the covariance matrix which is the default setting for the `prcomp` function in R. It’s worth repeating that a dedicated principal component function like `prcomp()` is superior in numerical stability and efficiency to the lines of code in the next section. **The only reason for directly computing the covariance matrix and its eigenvalues and eigenvectors (as opposed to `prcomp()`) is for edification. Computing a PCA in this manner, just this once, will help us grasp the exact mathematics of the situation and empower us to use built in functions with greater flexibility and understanding.**
### 13\.5\.2 Principal Components, Loadings, and Variance Explained
```
covM = cov(iris[1:4])
eig=eigen(covM,symmetric=TRUE,only.values=FALSE)
c=colnames(iris[1:4])
eig$values
```
```
## [1] 4.22824171 0.24267075 0.07820950 0.02383509
```
```
# Label the loadings
rownames(eig$vectors)=c(colnames(iris[1:4]))
# eig$vectors
```
The eigenvalues tell us how much of the total variance in the data is directed along each eigenvector. Thus, the amount of variance along \\(\\mathbf{v}\_1\\) is \\(\\lambda\_1\\) and the *proportion* of variance explained by the first principal component is
\\\[\\frac{\\lambda\_1}{\\lambda\_1\+\\lambda\_2\+\\lambda\_3\+\\lambda\_4}\\]
```
eig$values[1]/sum(eig$values)
```
```
## [1] 0.9246187
```
Thus 92% of the variation in the Iris data is explained by the first component alone. What if we consider the first and second principal component directions? Using this two dimensional representation (approximation/projection) we can capture the following proportion of variance:
\\\[\\frac{\\lambda\_1\+\\lambda\_2}{\\lambda\_1\+\\lambda\_2\+\\lambda\_3\+\\lambda\_4}\\]
```
sum(eig$values[1:2])/sum(eig$values)
```
```
## [1] 0.9776852
```
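The same bookkeeping can be done for all four components at once; a small one\-liner (reusing the `eig` object above) returns the cumulative proportions, whose first two entries reproduce the 92% and 97\.8% figures.
```
# Cumulative proportion of variance explained by the first 1, 2, 3, 4 components
cumsum(eig$values) / sum(eig$values)
```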
With two dimensions, we explain 97\.8% of the variance in these 4 variables! The entries in each eigenvector are called the **loadings** of the variables on the component. The loadings give us an idea of how important each variable is to each component. For example, it seems that the third variable in our dataset (Petal Length) is dominating the first principal component. This should not come as too much of a shock \- that variable had (by far) the largest amount of variation of the four. In order to capture the greatest amount of variance in a single dimension, we should certainly weight this variable heavily. The variable with the next largest variance, Sepal Length, dominates the second principal component.
**Note:** *Had Petal Length and Sepal Length been correlated, they would not have dominated separate principal components; they would have shared one. These two variables are not correlated and thus their variation cannot be captured along the same direction.*
### 13\.5\.3 Scores and PCA Projection
Let’s plot the *projection* of the four\-dimensional iris data onto the two\-dimensional space spanned by the first 2 principal components. To do this, we need coordinates. These coordinates are commonly called **scores** in statistical texts. We can find the coordinates of the data on the principal components by solving the system
\\\[\\mathbf{X}\=\\mathbf{A}\\mathbf{V}^T\\]
where \\(\\mathbf{X}\\) is our original iris data **(centered to have mean \= 0\)** and \\(\\mathbf{A}\\) is a matrix of coordinates in the new principal component space, spanned by the eigenvectors in \\(\\mathbf{V}\\).
Solving this system is simple enough, since \\(\\mathbf{V}\\) is an orthogonal matrix per Theorem [13\.1](pca.html#thm:eigsym): its transpose is its inverse. Let’s confirm this:
```
eig$vectors %*% t(eig$vectors)
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Sepal.Length 1.000000e+00 4.163336e-17 -2.775558e-17 -2.775558e-17
## Sepal.Width 4.163336e-17 1.000000e+00 1.665335e-16 1.942890e-16
## Petal.Length -2.775558e-17 1.665335e-16 1.000000e+00 -2.220446e-16
## Petal.Width -2.775558e-17 1.942890e-16 -2.220446e-16 1.000000e+00
```
```
t(eig$vectors) %*% eig$vectors
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1.000000e+00 -2.289835e-16 0.000000e+00 -1.110223e-16
## [2,] -2.289835e-16 1.000000e+00 2.775558e-17 -1.318390e-16
## [3,] 0.000000e+00 2.775558e-17 1.000000e+00 1.110223e-16
## [4,] -1.110223e-16 -1.318390e-16 1.110223e-16 1.000000e+00
```
We’ll have to settle for precision at 15 decimal places. Close enough!
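If the clutter of entries on the order of \\(10^{\-16}\\) is distracting, one optional convenience is to round the floating point noise away with base R’s `zapsmall()` (a small check, nothing more):
```
# Rounds near-zero entries to 0; this should print the 4x4 identity matrix
zapsmall(t(eig$vectors) %*% eig$vectors)
```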
To find the scores, we simply subtract the means from our original variables to create the data matrix \\(\\mathbf{X}\\) and compute
\\\[\\mathbf{A}\=\\mathbf{X}\\mathbf{V}\\]
```
# The scale function centers and scales by default
X=scale(iris[1:4],center=TRUE,scale=FALSE)
# Create data.frame from matrix for plotting purposes.
scores=data.frame(X %*% eig$vectors)
# Change default variable names
colnames(scores)=c("Prin1","Prin2","Prin3","Prin4")
# Print coordinates/scores of first 10 observations
scores[1:10, ]
```
```
## Prin1 Prin2 Prin3 Prin4
## 1 -2.684126 -0.31939725 -0.02791483 0.002262437
## 2 -2.714142 0.17700123 -0.21046427 0.099026550
## 3 -2.888991 0.14494943 0.01790026 0.019968390
## 4 -2.745343 0.31829898 0.03155937 -0.075575817
## 5 -2.728717 -0.32675451 0.09007924 -0.061258593
## 6 -2.280860 -0.74133045 0.16867766 -0.024200858
## 7 -2.820538 0.08946138 0.25789216 -0.048143106
## 8 -2.626145 -0.16338496 -0.02187932 -0.045297871
## 9 -2.886383 0.57831175 0.02075957 -0.026744736
## 10 -2.672756 0.11377425 -0.19763272 -0.056295401
```
To this point, we have simply computed coordinates (scores) on a new set of axes (formed by the principal components, i.e. the eigenvectors). These axes are orthogonal and are aligned with the directions of maximal variance in the data. When we consider only a subset of principal components (like the two components here that account for 97\.8% of the variance), we are projecting the data onto a lower dimensional space. Generally, this is one of the primary goals of PCA: project the data down into a lower dimensional space (*onto the span of the principal components*) while keeping the maximum amount of information (i.e. variance).
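To make the word *projection* concrete, here is a small sketch (the names `V2` and `X.approx` are only illustrative) that rebuilds an approximation of the original measurements from the first two columns of scores alone. Because only about 2% of the variance is discarded, the reconstructed values sit quite close to the originals.
```
# Rank-2 reconstruction of the data from the first 2 principal components
V2 = eig$vectors[, 1:2]
X.approx = as.matrix(scores[, 1:2]) %*% t(V2)            # back-project the scores
X.approx = sweep(X.approx, 2, colMeans(iris[1:4]), "+")  # add the column means back
head(round(X.approx, 2))
```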
Thus, we know that almost 98% of the data’s variance can be seen in two\-dimensions using the first two principal components. Let’s go ahead and see what this looks like:
```
plot(scores$Prin1, scores$Prin2,
main="Data Projected on First 2 Principal Components",
xlab="First Principal Component",
ylab="Second Principal Component",
col=c("red","green3","blue")[iris$Species])
```
### 13\.5\.4 PCA functions in R
```
irispca=prcomp(iris[1:4])
# Variance Explained
summary(irispca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 2.0563 0.49262 0.2797 0.15439
## Proportion of Variance 0.9246 0.05307 0.0171 0.00521
## Cumulative Proportion 0.9246 0.97769 0.9948 1.00000
```
```
# Eigenvectors:
irispca$rotation
```
```
## PC1 PC2 PC3 PC4
## Sepal.Length 0.36138659 -0.65658877 0.58202985 0.3154872
## Sepal.Width -0.08452251 -0.73016143 -0.59791083 -0.3197231
## Petal.Length 0.85667061 0.17337266 -0.07623608 -0.4798390
## Petal.Width 0.35828920 0.07548102 -0.54583143 0.7536574
```
```
# Coordinates of first 10 observations along PCs:
irispca$x[1:10, ]
```
```
## PC1 PC2 PC3 PC4
## [1,] -2.684126 -0.31939725 0.02791483 0.002262437
## [2,] -2.714142 0.17700123 0.21046427 0.099026550
## [3,] -2.888991 0.14494943 -0.01790026 0.019968390
## [4,] -2.745343 0.31829898 -0.03155937 -0.075575817
## [5,] -2.728717 -0.32675451 -0.09007924 -0.061258593
## [6,] -2.280860 -0.74133045 -0.16867766 -0.024200858
## [7,] -2.820538 0.08946138 -0.25789216 -0.048143106
## [8,] -2.626145 -0.16338496 0.02187932 -0.045297871
## [9,] -2.886383 0.57831175 -0.02075957 -0.026744736
## [10,] -2.672756 0.11377425 0.19763272 -0.056295401
```
All of the information we computed using eigenvectors aligns with what we see here, except that the coordinates/scores and the loadings of Principal Component 3 are of the opposite sign. In light of what we know about eigenvectors representing *directions*, this should be no cause for alarm. The `prcomp` function arrived at the unit basis vector pointing in the negative direction of the one we found directly from the `eig` function \- which should negate all the coordinates and leave us with an equivalent mirror image in all of our projections.
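A quick way to convince yourself that the two computations agree up to sign is to compare absolute values (a small check using the objects defined above); both differences should be on the order of machine precision:
```
# Scores agree up to sign...
max(abs(abs(irispca$x) - abs(as.matrix(scores))))
# ...and so do the loadings
max(abs(abs(irispca$rotation) - abs(eig$vectors)))
```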
### 13\.5\.5 The Biplot
One additional feature that R users have created is the **biplot**. The PCA biplot allows us to see where our original variables fall in the space of the principal components. Highly correlated variables will fall along the same direction (or exactly opposite directions), as a change in one of these variables correlates with a change in the other. Uncorrelated variables will appear further apart. The length of a variable’s vector on the biplot tells us the degree to which variability in that variable is explained in that direction; shorter vectors have less variability than longer vectors. So in the biplot below, petal width and petal length point in the same direction, indicating that these variables share a relatively high degree of correlation. However, the vector for petal width is much shorter than that of petal length, which means you can expect a higher degree of change in petal length as you proceed to the right along PC1\. PC1 explains more of the variance in petal length than it does petal width. If we were to imagine a third PC orthogonal to the plane shown, petal width would likely sit at a much larger angle off the plane \- here, it is being projected down from that 3\-dimensional picture.
```
biplot(irispca, col = c("gray", "blue"))
```
We can examine some of the outlying observations to see how they align with these projected variable directions. It helps to compare them to the quartiles of the data. Also keep in mind the direction of the arrows in the plot. If the arrow points down then the positive direction is down \- indicating observations which are greater than the mean. Let’s pick out observations 42 and 132 and see what the actual data points look like in comparison to the rest of the sample population.
```
summary(iris[1:4])
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## Min. :4.300 Min. :2.000 Min. :1.000 Min. :0.100
## 1st Qu.:5.100 1st Qu.:2.800 1st Qu.:1.600 1st Qu.:0.300
## Median :5.800 Median :3.000 Median :4.350 Median :1.300
## Mean :5.843 Mean :3.057 Mean :3.758 Mean :1.199
## 3rd Qu.:6.400 3rd Qu.:3.300 3rd Qu.:5.100 3rd Qu.:1.800
## Max. :7.900 Max. :4.400 Max. :6.900 Max. :2.500
```
```
# Consider orientation of outlying observations:
iris[42, ]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 42 4.5 2.3 1.3 0.3 setosa
```
```
iris[132, ]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 132 7.9 3.8 6.4 2 virginica
```
13\.6 Variable Clustering with PCA
----------------------------------
The direction arrows on the biplot are merely the coefficients of the original variables when combined to make principal components. Don’t forget that principal components are simply linear combinations of the original variables.
For example, here we have the first principal component (the first column of \\(\\V\\)), \\(\\mathbf{v}\_1\\) as:
```
eig$vectors[,1]
```
```
## Sepal.Length Sepal.Width Petal.Length Petal.Width
## 0.36138659 -0.08452251 0.85667061 0.35828920
```
This means that the **coordinates of the data along** the first principal component, which we’ll denote here as \\(PC\_1\\) are given by a simple linear combination of our original variables after centering (for covariance PCA) or standardization (for correlation PCA)
\\\[PC\_1 \= 0\.36Sepal.Length\-0\.08Sepal.Width\+0\.85Petal.Length \+0\.35Petal.Width\\]
the same equation could be written for each of the vectors of coordinates along principal components, \\(PC\_1,\\dots, PC\_4\\).
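As a quick check (reusing the centered matrix `X` and the `scores` computed earlier; `pc1.byhand` is only an illustrative name), this linear combination does reproduce the first column of scores:
```
# PC1 scores as a linear combination of the centered variables
# (X is the centered data matrix defined in the previous section)
pc1.byhand = X %*% eig$vectors[, 1]
head(cbind(pc1.byhand, scores$Prin1))
```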
Essentially, we have a system of equations telling us that the rows of \\(\\V^T\\) (i.e. the columns of \\(\\V\\)) give us the weights of each variable for each principal component:
\\\[\\begin{equation}
\\tag{13\.2}
\\begin{bmatrix} PC\_1\\\\PC\_2\\\\PC\_3\\\\PC\_4\\end{bmatrix} \= \\mathbf{V}^T\\begin{bmatrix}Sepal.Length\\\\Sepal.Width\\\\Petal.Length\\\\Petal.Width\\end{bmatrix}
\\end{equation}\\]
Thus, if we want the coordinates of our original variables in terms of Principal Components (so that we can plot them as we do in the biplot), we need to look no further than the rows of the matrix \\(\\mathbf{V}\\):
\\\[\\begin{equation}
\\tag{13\.3}
\\begin{bmatrix}Sepal.Length\\\\Sepal.Width\\\\Petal.Length\\\\Petal.Width\\end{bmatrix} \=\\mathbf{V}\\begin{bmatrix} PC\_1\\\\PC\_2\\\\PC\_3\\\\PC\_4\\end{bmatrix}
\\end{equation}\\]
That is, the rows of \\(\\mathbf{V}\\) give us the coordinates of our original variables in the PCA space. The transition from Equation [(13\.2\)](pca.html#eq:cpc1) to Equation [(13\.3\)](pca.html#eq:cpc2) is provided by the orthogonality of the eigenvectors per Theorem [13\.1](pca.html#thm:eigsym).
```
# First entry of each eigenvector gives the coefficient for Variable 1:
eig$vectors[1,]
```
```
## [1] 0.3613866 -0.6565888 -0.5820299 0.3154872
```
\\\[Sepal.Length \= 0\.361 PC\_1 \- 0\.657 PC\_2 \- 0\.582 PC\_3 \+ 0\.315 PC\_4\\]
You can see this on the biplot. The vector shown for Sepal.Length is (0\.361, \-0\.656\), which is the two dimensional projection formed by throwing out components 3 and 4\.
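A small sketch confirms this numerically (reusing the `scores` and `eig` objects; `sl.recon` is only an illustrative name): multiplying the scores by the first row of \\(\\mathbf{V}\\) recovers the centered Sepal.Length measurements.
```
# Centered Sepal.Length recovered from its coordinates on the 4 PCs
sl.recon = as.matrix(scores) %*% eig$vectors[1, ]
head(cbind(sl.recon, iris$Sepal.Length - mean(iris$Sepal.Length)))
```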
Variables which lie upon similar directions in the PCA space tend to change together in a similar fashion. We might consider Petal.Width and Petal.Length as a cluster of variables because they share a direction on the biplot, which means they represent much of the same information (the underlying construct being the “size of the petal” in this case).
### 13\.6\.1 Correlation PCA
We can complete the same analysis using the correlation matrix. I’ll leave it as an exercise to compute the Principal Component loadings and scores and variance explained directly from eigenvectors and eigenvalues. You should do this and compare your results to the R output. *(Beware: you must transform your data before solving for the scores. With the covariance version, this meant centering \- for the correlation version, this means standardization as well)*
```
irispca2=prcomp(iris[1:4], cor=TRUE)
```
```
## Warning: In prcomp.default(iris[1:4], cor = TRUE) :
## extra argument 'cor' will be disregarded
```
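One caveat worth flagging: `cor=` is an argument to the older `princomp()` function, not to `prcomp()`, which is why the warning above says it will be disregarded. To request a correlation PCA from `prcomp()`, standardize the variables via the `scale.` argument; a minimal sketch (the name `irispca2.cor` is used only for illustration):
```
# Correlation PCA with prcomp(): standardize the variables via `scale.`
irispca2.cor = prcomp(iris[1:4], scale. = TRUE)
summary(irispca2.cor)
```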
```
summary(irispca2)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 2.0563 0.49262 0.2797 0.15439
## Proportion of Variance 0.9246 0.05307 0.0171 0.00521
## Cumulative Proportion 0.9246 0.97769 0.9948 1.00000
```
```
irispca2$rotation
```
```
## PC1 PC2 PC3 PC4
## Sepal.Length 0.36138659 -0.65658877 0.58202985 0.3154872
## Sepal.Width -0.08452251 -0.73016143 -0.59791083 -0.3197231
## Petal.Length 0.85667061 0.17337266 -0.07623608 -0.4798390
## Petal.Width 0.35828920 0.07548102 -0.54583143 0.7536574
```
```
irispca2$x[1:10,]
```
```
## PC1 PC2 PC3 PC4
## [1,] -2.684126 -0.31939725 0.02791483 0.002262437
## [2,] -2.714142 0.17700123 0.21046427 0.099026550
## [3,] -2.888991 0.14494943 -0.01790026 0.019968390
## [4,] -2.745343 0.31829898 -0.03155937 -0.075575817
## [5,] -2.728717 -0.32675451 -0.09007924 -0.061258593
## [6,] -2.280860 -0.74133045 -0.16867766 -0.024200858
## [7,] -2.820538 0.08946138 -0.25789216 -0.048143106
## [8,] -2.626145 -0.16338496 0.02187932 -0.045297871
## [9,] -2.886383 0.57831175 -0.02075957 -0.026744736
## [10,] -2.672756 0.11377425 0.19763272 -0.056295401
```
```
plot(irispca2$x[,1],irispca2$x[,2],
main="Data Projected on First 2 Principal Components",
xlab="First Principal Component",
ylab="Second Principal Component",
col=c("red","green3","blue")[iris$Species])
```
```
biplot(irispca2)
```
Here you can see that the direction vectors of the original variables are relatively uniform in length in the PCA space. This is due to the standardization in the correlation matrix. However, the general message is the same: Petal.Width and Petal.Length cluster together, and many of the same observations appear “on the fray” in the PCA space \- although not all of them!
### 13\.6\.2 Which Projection is Better?
What do you think? It depends on the task, and it depends on the data. One flavor of PCA is not “better” than the other. Correlation PCA is appropriate when the scales of your attributes differ wildly, and covariance PCA would be inappropriate in that situation. But in all other scenarios, when the scales of our attributes are roughly the same, we should always consider both dimension reductions and make a decision based upon the resulting output (variance explained, projection plots, loadings).
For the iris data, the results in terms of variable clustering are pretty much the same. For clustering/classifying the 3 species of flowers, we can see better separation in the covariance version.
### 13\.6\.3 Beware of biplots
Be careful not to draw improper conclusions from biplots. Particularly, be careful about situations where the first two principal components do not summarize the majority of the variance. If a large amount of variance is captured by the 3rd or 4th (or higher) principal components, then we must keep in mind that the variable projections on the first two principal components are flattened out versions of a higher dimensional picture. If a variable vector appears short in the 2\-dimensional projection, it means one of two things:
* That variable has small variance
* That variable appears to have small variance when depicted in the space of the first two principal components, but truly has a larger variance which is represented by 3rd or higher principal components.
Let’s take a look at an example of this. We’ll generate 500 rows of data on 4 nearly independent normal random variables. Since these variables are uncorrelated, we might expect that the 4 orthogonal principal components will line up relatively close to the original variables. If this doesn’t happen, then at the very least we can expect the biplot to show little to no correlation between the variables. We’ll give variables \\(2\\) and \\(3\\) the largest variance. Multiple runs of this code will generate different results with similar implications.
```
means=c(2,4,1,3)    # column means
sigmas=c(7,9,10,8)  # column standard deviations
sample.size=500
# Each column: 500 draws from a normal with the given mean and sd (n is passed by name)
data=mapply(function(mu,sig){rnorm(mu,sig, n=sample.size)},mu=means,sig=sigmas)
cor(data)
cor(data)
```
```
## [,1] [,2] [,3] [,4]
## [1,] 1.00000000 -0.00301237 0.053142703 0.123924125
## [2,] -0.00301237 1.00000000 -0.023528385 -0.029730772
## [3,] 0.05314270 -0.02352838 1.000000000 -0.009356552
## [4,] 0.12392412 -0.02973077 -0.009356552 1.000000000
```
```
pc=prcomp(data,scale=TRUE)
summary(pc)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 1.0664 1.0066 0.9961 0.9259
## Proportion of Variance 0.2843 0.2533 0.2480 0.2143
## Cumulative Proportion 0.2843 0.5376 0.7857 1.0000
```
```
pc$rotation
```
```
## PC1 PC2 PC3 PC4
## [1,] -0.6888797 -0.1088649 0.2437045 0.6739446
## [2,] 0.1995146 -0.5361096 0.8018612 -0.1726242
## [3,] -0.2568194 0.7601069 0.5028666 -0.3215688
## [4,] -0.6478290 -0.3506744 -0.2115465 -0.6423341
```
```
biplot(pc)
```
Figure 13\.2: Biplot of the simulated (nearly uncorrelated) data
Obviously, the wrong conclusion to make from this biplot is that Variables 1 and 4 are correlated. Variables 1 and 4 do not load highly on the first two principal components \- in the *whole* 4\-dimensional principal component space they are nearly orthogonal to each other and to variables 2 and 3\. Thus, their orthogonal projections appear near the origin of this 2\-dimensional subspace.
The morals of the story:
* Always corroborate your results using the variable loadings and the amount of variation explained by each variable.
* When a variable shows up near the origin in a biplot, it is generally not well represented by your two\-dimensional approximation of the data.
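Acting on the first of these morals, here is a small sketch (reusing the `pc` object from the simulated example above) that computes how much of each standardized variable’s unit variance is actually captured by the first two components; variables with small values here are exactly the ones whose short arrows should not be over\-interpreted.
```
# Share of each (standardized) variable's variance captured by PCs 1 and 2
lam = pc$sdev^2                                    # eigenvalues of the correlation matrix
rowSums(pc$rotation[, 1:2]^2 %*% diag(lam[1:2]))   # one value per variable, between 0 and 1
```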
Chapter 14 Applications of Principal Components
===============================================
Principal components have a number of applications across many areas of statistics. In the next sections, we will explore their usefulness in the context of dimension reduction.
14\.1 Dimension reduction
-------------------------
It is quite common for an analyst to have too many variables. There are two different solutions to this problem:
1. **Feature Selection**: Choose a subset of existing variables to be used in a model.
2. **Feature Extraction**: Create a new set of features which are combinations of original variables.
### 14\.1\.1 Feature Selection
Let’s think for a minute about feature selection. What are we really doing when we consider a subset of our existing variables? Take the two\-dimensional data in Example [**??**](#ex:pcabasis) (while two dimensions rarely necessitate dimension reduction, the geometrical interpretation extends to higher dimensions as usual!). The centered data appears as follows:
Now say we perform some kind of feature selection (there are a number of ways to do this, chi\-square tests for instance) and we determine that the variable \\(\\x\_2\\) is more important than \\(\\x\_1\\). So we throw out \\(\\x\_1\\) and we’ve reduced the dimensions from \\(p\=2\\) to \\(k\=1\\). Geometrically, what does our new data look like? By dropping \\(\\x\_1\\) we set all of those horizontal coordinates to zero. In other words, we **project the data orthogonally** onto the \\(\\x\_2\\) axis, as illustrated in Figure [14\.1](pcaapp.html#fig:pcpointsselect).
Figure 14\.1: Geometrical Interpretation of Feature Selection: When we “drop” the variable \\(\\x\_1\\) from our analysis, we are projecting the data onto the span(\\(\\x\_2\\))
Now, how much information (variance) did we lose with this projection? The total variance in the original data is
\\\[\\\|\\x\_1\\\|^2\+\\\|\\x\_2\\\|^2\.\\]
The variance of our data reduction is
\\\[\\\|\\x\_2\\\|^2\.\\]
Thus, the proportion of the total information (variance) we’ve kept is
\\\[\\frac{\\\|\\x\_2\\\|^2}{\\\|\\x\_1\\\|^2\+\\\|\\x\_2\\\|^2}\=\\frac{6\.01}{5\.6\+6\.01} \= 51\.7\\%.\\]
Our reduced dimensional data contains only 51\.7% of the variance of the original data. We’ve lost a lot of information!
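As a minimal sketch (assuming the centered two\-column data from the example are stored in a matrix called `X.centered`; that name is used here only for illustration), the retained proportion is just a ratio of squared column norms:
```
# Hypothetical: X.centered holds the centered (x1, x2) data from the example
sq.norms = colSums(X.centered^2)   # ||x1||^2 and ||x2||^2
sq.norms[2] / sum(sq.norms)        # proportion of variance kept after dropping x1
```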
The fact that feature selection omits variance in our predictor variables does not make it a bad thing! Obviously, getting rid of variables which have no relationship to a target variable (in the case of *supervised* modeling like prediction and classification) is a good thing. But, in the case of *unsupervised* learning techniques, where there is no target variable involved, we must be extra careful when it comes to feature selection. In summary,
1. Feature Selection is important. Examples include:
    * Removing variables which have little to no impact on a target variable in supervised modeling (forward/backward/stepwise selection).
    * Removing variables which have obvious strong correlation with other predictors.
    * Removing variables that are not interesting in unsupervised learning (For example, you may not want to use the words “the” and “of” when clustering text).
2. Feature Selection is an orthogonal projection of the original data onto the span of the variables you choose to keep.
3. Feature selection should always be done with care and justification.
    * In regression, could create problems of endogeneity (errors correlated with predictors \- omitted variable bias).
    * For unsupervised modelling, could lose important information.
### 14\.1\.2 Feature Extraction
PCA is the most common form of feature extraction. The rotation of the space shown in Example [**??**](#ex:pcabasis) represents the creation of new features which are linear combinations of the original features. If we have \\(p\\) potential variables for a model and want to reduce that number to \\(k\\), then the first \\(k\\) principal components combine the individual variables in such a way that is guaranteed to capture as much “information” (variance) as possible. Again, take our two\-dimensional data as an example. When we reduce our data down to one\-dimension using principal components, we essentially do the same orthogonal projection that we did in Feature Selection, only in this case we conduct that projection in the new basis of principal components. Recall that for this data, our first principal component \\(\\v\_1\\) was \\\[\\v\_1 \= \\begin{bmatrix} 0\.69 \\\\ 0\.73 \\end{bmatrix}.\\]
Projecting the data onto the first principal component is illustrated in Figure [14\.2](pcaapp.html#fig:pcaproj).
Figure 14\.2: Illustration of Feature Extraction via PCA
How much variance do we keep with \\(k\\) principal components? The proportion of variance explained by each principal component is the ratio of the corresponding eigenvalue to the sum of the eigenvalues (which gives the total amount of variance in the data).
::: {.theorem #pcpropvar name="Proportion of Variance Explained"}
The proportion of variance explained by the projection of the data onto principal component \\(\\v\_i\\) is
\\\[\\frac{\\lambda\_i}{\\sum\_{j\=1}^p \\lambda\_j}.\\]
Similarly, the proportion of variance explained by the projection of the data onto the first \\(k\\) principal components (\\(k\<p\\)) is
\\\[ \\frac{\\sum\_{i\=1}^k\\lambda\_i}{\\sum\_{j\=1}^p \\lambda\_j}\\]
:::
In our simple 2 dimensional example we were able to keep
\\\[\\frac{\\lambda\_1}{\\lambda\_1\+\\lambda\_2}\=\\frac{10\.61}{10\.61\+1\.00} \= 91\.38\\%\\]
of our variance in one dimension.
14\.2 Exploratory Analysis
--------------------------
### 14\.2\.1 UK Food Consumption
#### 14\.2\.1\.1 Explore the Data
The data for this example can be read directly from our course webpage. When we first examine the data, we will see that the rows correspond to different types of food/drink and the columns correspond to the 4 countries within the UK. Our first matter of business is transposing this data so that the 4 countries become our observations (i.e. rows).
```
food=read.csv("http://birch.iaa.ncsu.edu/~slrace/LinearAlgebra2021/Code/ukfood.csv",
header=TRUE,row.names=1)
```
```
library(reshape2) #melt data matrix into 3 columns
library(ggplot2) #heatmap
head(food)
```
```
## England Wales Scotland N.Ireland
## Cheese 105 103 103 66
## Carcass meat 245 227 242 267
## Other meat 685 803 750 586
## Fish 147 160 122 93
## Fats and oils 193 235 184 209
## Sugars 156 175 147 139
```
```
food=as.data.frame(t(food))
head(food)
```
```
## Cheese Carcass meat Other meat Fish Fats and oils Sugars
## England 105 245 685 147 193 156
## Wales 103 227 803 160 235 175
## Scotland 103 242 750 122 184 147
## N.Ireland 66 267 586 93 209 139
## Fresh potatoes Fresh Veg Other Veg Processed potatoes Processed Veg
## England 720 253 488 198 360
## Wales 874 265 570 203 365
## Scotland 566 171 418 220 337
## N.Ireland 1033 143 355 187 334
## Fresh fruit Cereals Beverages Soft drinks Alcoholic drinks
## England 1102 1472 57 1374 375
## Wales 1137 1582 73 1256 475
## Scotland 957 1462 53 1572 458
## N.Ireland 674 1494 47 1506 135
## Confectionery
## England 54
## Wales 64
## Scotland 62
## N.Ireland 41
```
Next we will visualize the information in this data using a simple heat map. To do this we will standardize and then melt the data using the `reshape2` package, and then use a `ggplot()` heatmap.
```
food.std = scale(food, center=T, scale = T)
food.melt = melt(food.std, id.vars = row.names(food.std), measure.vars = 1:17)
ggplot(data = food.melt, aes(x=Var1, y=Var2, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-2,2), space = "Lab"
) + theme_minimal()+
theme(axis.title.x = element_blank(),axis.title.y = element_blank(),
axis.text.y = element_text(face = 'bold', size = 12, colour = 'black'),
axis.text.x = element_text(angle = 45, vjust = 1, face = 'bold',
size = 12, colour = 'black', hjust = 1))+coord_fixed()
```
#### 14\.2\.1\.2 `prcomp()` function for PCA
The `prcomp()` function is the one I most often recommend for reasonably sized principal component calculations in R. This function returns a list with class “prcomp” containing the following components (from help prcomp):
1. `sdev`: the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix).
2. `rotation`: the matrix of *variable loadings* (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
3. `x`: if retx is true *the value of the rotated data (i.e. the scores)* (the centred (and scaled if requested) data multiplied by the rotation matrix) is returned. Hence, cov(x) is the diagonal matrix \\(diag(sdev^2)\\). For the formula method, napredict() is applied to handle the treatment of values omitted by the na.action.
4. `center`, `scale`: the centering and scaling used, or FALSE.
The option `scale = TRUE` inside the `prcomp()` function instructs the program to use **correlation PCA**. *The default is covariance PCA*.
```
pca=prcomp(food, scale = T)
```
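Before plotting, recall from the component list above that `sdev` holds the square roots of the eigenvalues, so the eigenvalues and the proportions of variance reported by `summary()` can be recovered directly (a small check; for a correlation PCA the eigenvalues sum to the number of variables, 17 here):
```
# Eigenvalues (variances along each PC) and proportion of variance explained
pca$sdev^2
pca$sdev^2 / sum(pca$sdev^2)
```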
This first plot just looks at magnitudes of eigenvalues \- it is essentially the screeplot in barchart form.
```
summary(pca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 3.4082 2.0562 1.07524 6.344e-16
## Proportion of Variance 0.6833 0.2487 0.06801 0.000e+00
## Cumulative Proportion 0.6833 0.9320 1.00000 1.000e+00
```
```
plot(pca, main = "Bar-style Screeplot")
```
The next plot views our four datapoints (locations) projected onto the 2\-dimensional subspace
(from 17 dimensions) that captures as much information (i.e. variance) as possible.
```
plot(pca$x,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'The four observations projected into 2-dimensional space')
text(pca$x[,1], pca$x[,2],row.names(food))
```
#### 14\.2\.1\.3 The BiPlot
Now we can also view our original variable axes projected down onto that same space!
```
biplot(pca$x,pca$rotation, cex = c(1.5, 1), col = c('black','red'))#,
#      xlim = c(-0.8,0.8), ylim = c(-0.6,0.7))
```
Figure 14\.3: BiPlot: The observations and variables projected onto the same plane.
#### 14\.2\.1\.4 Formatting the biplot for readability
I will soon introduce the `autoplot()` function from the `ggfortify` package, but for now I just want to show you that you can specify *which* variables (and observations) to include in the biplot by directly specifying the loadings matrix and scores matrix of interest in the biplot function:
```
desired.variables = c(2,4,6,8,10)
biplot(pca$x, pca$rotation[desired.variables,1:2], cex = c(1.5, 1),
col = c('black','red'), xlim = c(-6,5), ylim = c(-4,4))
```
Figure 14\.4: Specify a Subset of Variables/Observations to Include in the Biplot
#### 14\.2\.1\.5 What are all these axes?
Those numbers relate to the scores on PC1 and PC2 (sometimes normalized so that each new variable has variance 1 \- and sometimes not) and the loadings on PC1 and PC2 (sometimes normalized so that each variable vector is a unit vector \- and sometimes scaled by the eigenvalues or square roots of the eigenvalues in some fashion).
Generally, I’ve rarely found it useful to hunt down how each package renders the axes of the biplot, as they should be providing the same information regardless of the scale of the *numbers* on the axes. We don’t actually use those numbers to help us draw conclusions. We use the directions of the arrows and the layout of the points in reference to those direction arrows. To illustrate the point, the code below applies a varimax rotation to the first two loading vectors (rotating the scores with the same orthogonal matrix); the numbers on the axes change, but the joint geometry of points and arrows that we interpret does not.
```
vmax = varimax(pca$rotation[,1:2])        # varimax rotation of the first 2 loading vectors
new.scores = pca$x[,1:2] %*% vmax$rotmat  # rotate the scores with the same orthogonal matrix
biplot(new.scores, vmax$loadings[,1:2],
# xlim=c(-60,60),
# ylim=c(-60,60),
cex = c(1.5, 1),
xlab = 'Rotated Axis 1',
ylab = 'Rotated Axis 2')
```
Figure 14\.5: Biplot with Rotated Loadings
```
vmax$loadings[,1:2]
```
```
## PC1 PC2
## Cheese 0.02571143 0.34751491
## Carcass meat -0.16660468 -0.24450375
## Other meat 0.11243721 0.27569481
## Fish 0.22437069 0.17788300
## Fats and oils 0.35728064 -0.22128124
## Sugars 0.30247003 0.07908986
## Fresh potatoes 0.22174898 -0.40880955
## Fresh Veg 0.26432097 0.09953752
## Other Veg 0.27836185 0.11640174
## Processed potatoes -0.17545152 0.39011648
## Processed Veg 0.29583164 0.05084727
## Fresh fruit 0.15852128 0.24360131
## Cereals 0.34963293 -0.13363398
## Beverages 0.30030152 0.07604823
## Soft drinks -0.36374762 0.07438738
## Alcoholic drinks 0.04243636 0.34240944
## Confectionery 0.05450175 0.32474821
```
14\.3 FIFA Soccer Players
-------------------------
#### 14\.3\.0\.1 Explore the Data
We begin by loading in the data and taking a quick look at the variables that we’ll be using in our PCA for this exercise. You may need to install the packages from the following `library()` statements.
```
library(reshape2) #melt correlation matrix into 3 columns
library(ggplot2) #correlation heatmap
library(ggfortify) #autoplot bi-plot
library(viridis) # magma palette
```
```
## Loading required package: viridisLite
```
```
library(plotrix) # color.legend
```
Now we’ll read the data directly from the web, take a peek at the first 5 rows, and explore some summary statistics.
```
## Name Age Photo
## 1 Cristiano Ronaldo 32 https://cdn.sofifa.org/48/18/players/20801.png
## 2 L. Messi 30 https://cdn.sofifa.org/48/18/players/158023.png
## 3 Neymar 25 https://cdn.sofifa.org/48/18/players/190871.png
## 4 L. Suárez 30 https://cdn.sofifa.org/48/18/players/176580.png
## 5 M. Neuer 31 https://cdn.sofifa.org/48/18/players/167495.png
## 6 R. Lewandowski 28 https://cdn.sofifa.org/48/18/players/188545.png
## Nationality Flag Overall Potential
## 1 Portugal https://cdn.sofifa.org/flags/38.png 94 94
## 2 Argentina https://cdn.sofifa.org/flags/52.png 93 93
## 3 Brazil https://cdn.sofifa.org/flags/54.png 92 94
## 4 Uruguay https://cdn.sofifa.org/flags/60.png 92 92
## 5 Germany https://cdn.sofifa.org/flags/21.png 92 92
## 6 Poland https://cdn.sofifa.org/flags/37.png 91 91
## Club Club.Logo Value Wage
## 1 Real Madrid CF https://cdn.sofifa.org/24/18/teams/243.png €95.5M €565K
## 2 FC Barcelona https://cdn.sofifa.org/24/18/teams/241.png €105M €565K
## 3 Paris Saint-Germain https://cdn.sofifa.org/24/18/teams/73.png €123M €280K
## 4 FC Barcelona https://cdn.sofifa.org/24/18/teams/241.png €97M €510K
## 5 FC Bayern Munich https://cdn.sofifa.org/24/18/teams/21.png €61M €230K
## 6 FC Bayern Munich https://cdn.sofifa.org/24/18/teams/21.png €92M €355K
## Special Acceleration Aggression Agility Balance Ball.control Composure
## 1 2228 89 63 89 63 93 95
## 2 2154 92 48 90 95 95 96
## 3 2100 94 56 96 82 95 92
## 4 2291 88 78 86 60 91 83
## 5 1493 58 29 52 35 48 70
## 6 2143 79 80 78 80 89 87
## Crossing Curve Dribbling Finishing Free.kick.accuracy GK.diving GK.handling
## 1 85 81 91 94 76 7 11
## 2 77 89 97 95 90 6 11
## 3 75 81 96 89 84 9 9
## 4 77 86 86 94 84 27 25
## 5 15 14 30 13 11 91 90
## 6 62 77 85 91 84 15 6
## GK.kicking GK.positioning GK.reflexes Heading.accuracy Interceptions Jumping
## 1 15 14 11 88 29 95
## 2 15 14 8 71 22 68
## 3 15 15 11 62 36 61
## 4 31 33 37 77 41 69
## 5 95 91 89 25 30 78
## 6 12 8 10 85 39 84
## Long.passing Long.shots Marking Penalties Positioning Reactions Short.passing
## 1 77 92 22 85 95 96 83
## 2 87 88 13 74 93 95 88
## 3 75 77 21 81 90 88 81
## 4 64 86 30 85 92 93 83
## 5 59 16 10 47 12 85 55
## 6 65 83 25 81 91 91 83
## Shot.power Sliding.tackle Sprint.speed Stamina Standing.tackle Strength
## 1 94 23 91 92 31 80
## 2 85 26 87 73 28 59
## 3 80 33 90 78 24 53
## 4 87 38 77 89 45 80
## 5 25 11 61 44 10 83
## 6 88 19 83 79 42 84
## Vision Volleys position
## 1 85 88 1
## 2 90 85 1
## 3 80 83 1
## 4 84 88 1
## 5 70 11 4
## 6 78 87 1
```
```
## Acceleration Aggression Agility Balance Ball.control
## Min. :11.00 Min. :11.00 Min. :14.00 Min. :11.00 Min. : 8
## 1st Qu.:56.00 1st Qu.:43.00 1st Qu.:55.00 1st Qu.:56.00 1st Qu.:53
## Median :67.00 Median :58.00 Median :65.00 Median :66.00 Median :62
## Mean :64.48 Mean :55.74 Mean :63.25 Mean :63.76 Mean :58
## 3rd Qu.:75.00 3rd Qu.:69.00 3rd Qu.:74.00 3rd Qu.:74.00 3rd Qu.:69
## Max. :96.00 Max. :96.00 Max. :96.00 Max. :96.00 Max. :95
## Composure Crossing Curve Dribbling Finishing
## Min. : 5.00 Min. : 5.0 Min. : 6.0 Min. : 2.00 Min. : 2.00
## 1st Qu.:51.00 1st Qu.:37.0 1st Qu.:34.0 1st Qu.:48.00 1st Qu.:29.00
## Median :60.00 Median :54.0 Median :48.0 Median :60.00 Median :48.00
## Mean :57.82 Mean :49.7 Mean :47.2 Mean :54.94 Mean :45.18
## 3rd Qu.:67.00 3rd Qu.:64.0 3rd Qu.:62.0 3rd Qu.:68.00 3rd Qu.:61.00
## Max. :96.00 Max. :91.0 Max. :92.0 Max. :97.00 Max. :95.00
## Free.kick.accuracy GK.diving GK.handling GK.kicking
## Min. : 4.00 Min. : 1.00 Min. : 1.00 Min. : 1.00
## 1st Qu.:31.00 1st Qu.: 8.00 1st Qu.: 8.00 1st Qu.: 8.00
## Median :42.00 Median :11.00 Median :11.00 Median :11.00
## Mean :43.08 Mean :16.78 Mean :16.55 Mean :16.42
## 3rd Qu.:57.00 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:14.00
## Max. :93.00 Max. :91.00 Max. :91.00 Max. :95.00
## GK.positioning GK.reflexes Heading.accuracy Interceptions
## Min. : 1.00 Min. : 1.00 Min. : 4.00 Min. : 4.00
## 1st Qu.: 8.00 1st Qu.: 8.00 1st Qu.:44.00 1st Qu.:26.00
## Median :11.00 Median :11.00 Median :55.00 Median :52.00
## Mean :16.54 Mean :16.91 Mean :52.26 Mean :46.53
## 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:64.00 3rd Qu.:64.00
## Max. :91.00 Max. :90.00 Max. :94.00 Max. :92.00
## Jumping Long.passing Long.shots Marking
## Min. :15.00 Min. : 7.00 Min. : 3.00 Min. : 4.00
## 1st Qu.:58.00 1st Qu.:42.00 1st Qu.:32.00 1st Qu.:22.00
## Median :66.00 Median :56.00 Median :51.00 Median :48.00
## Mean :64.84 Mean :52.37 Mean :47.11 Mean :44.09
## 3rd Qu.:73.00 3rd Qu.:64.00 3rd Qu.:62.00 3rd Qu.:63.00
## Max. :95.00 Max. :93.00 Max. :92.00 Max. :92.00
## Penalties Positioning Reactions Short.passing
## Min. : 5.00 Min. : 2.00 Min. :28.00 Min. :10.00
## 1st Qu.:39.00 1st Qu.:38.00 1st Qu.:55.00 1st Qu.:53.00
## Median :50.00 Median :54.00 Median :62.00 Median :62.00
## Mean :48.92 Mean :49.53 Mean :61.85 Mean :58.22
## 3rd Qu.:61.00 3rd Qu.:64.00 3rd Qu.:68.00 3rd Qu.:68.00
## Max. :92.00 Max. :95.00 Max. :96.00 Max. :92.00
## Shot.power Sliding.tackle Sprint.speed Stamina
## Min. : 3.00 Min. : 4.00 Min. :11.00 Min. :12.00
## 1st Qu.:46.00 1st Qu.:24.00 1st Qu.:57.00 1st Qu.:56.00
## Median :59.00 Median :52.00 Median :67.00 Median :66.00
## Mean :55.57 Mean :45.56 Mean :64.72 Mean :63.13
## 3rd Qu.:68.00 3rd Qu.:64.00 3rd Qu.:75.00 3rd Qu.:74.00
## Max. :94.00 Max. :91.00 Max. :96.00 Max. :95.00
## Standing.tackle Strength Vision Volleys
## Min. : 4.00 Min. :20.00 Min. :10.00 Min. : 4.00
## 1st Qu.:26.00 1st Qu.:58.00 1st Qu.:43.00 1st Qu.:30.00
## Median :54.00 Median :66.00 Median :54.00 Median :44.00
## Mean :47.41 Mean :65.24 Mean :52.93 Mean :43.13
## 3rd Qu.:66.00 3rd Qu.:74.00 3rd Qu.:64.00 3rd Qu.:57.00
## Max. :92.00 Max. :98.00 Max. :94.00 Max. :91.00
```
These variables are scores on the scale of \[0,100] that measure 34 key abilities of soccer players. No player has ever earned a score of 100 on any of these attributes \- no player is *perfect*!
It would be natural to assume some correlation between these variables and indeed, we see lots of it in the following heatmap visualization of the correlation matrix.
```
cor.matrix = cor(fifa[,13:46])
cor.matrix = melt(cor.matrix)
ggplot(data = cor.matrix, aes(x=Var1, y=Var2, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1), space = "Lab",
name="Correlation") + theme_minimal()+
theme(axis.title.x = element_blank(),axis.title.y = element_blank(),
axis.text.x = element_text(angle = 45, vjust = 1,
size = 9, hjust = 1))+coord_fixed()
```
Figure 14\.6: Heatmap of correlation matrix for 34 variables of interest
What jumps out right away are the “GK” (Goal Keeping) abilities \- these attributes have *very* strong positive correlation with one another and negative correlation with the other abilities. After all, goal keepers are not traditionally well known for their dribbling, passing, and finishing abilities!
Outside of that, we see a lot of red in this correlation matrix – many attributes share a lot of information. This is the type of situation where PCA shines.
#### 14\.3\.0\.2 Principal Components Analysis
Let’s take a look at the principal components analysis. Since the variables are on the same scale, I’ll start with **covariance PCA** (the default in R’s `prcomp()` function).
```
fifa.pca = prcomp(fifa[,13:46] )
```
We can then print the summary of variance explained and the loadings on the first 3 components:
```
summary(fifa.pca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 74.8371 43.5787 23.28767 20.58146 16.12477 10.71539
## Proportion of Variance 0.5647 0.1915 0.05468 0.04271 0.02621 0.01158
## Cumulative Proportion 0.5647 0.7561 0.81081 0.85352 0.87973 0.89131
## PC7 PC8 PC9 PC10 PC11 PC12 PC13
## Standard deviation 10.17785 9.11852 8.98065 8.5082 8.41550 7.93741 7.15935
## Proportion of Variance 0.01044 0.00838 0.00813 0.0073 0.00714 0.00635 0.00517
## Cumulative Proportion 0.90175 0.91013 0.91827 0.9256 0.93270 0.93906 0.94422
## PC14 PC15 PC16 PC17 PC18 PC19 PC20
## Standard deviation 7.06502 6.68497 6.56406 6.50459 6.22369 6.08812 6.00578
## Proportion of Variance 0.00503 0.00451 0.00434 0.00427 0.00391 0.00374 0.00364
## Cumulative Proportion 0.94926 0.95376 0.95811 0.96237 0.96628 0.97001 0.97365
## PC21 PC22 PC23 PC24 PC25 PC26 PC27
## Standard deviation 5.91320 5.66946 5.45018 5.15051 4.86761 4.34786 4.1098
## Proportion of Variance 0.00353 0.00324 0.00299 0.00267 0.00239 0.00191 0.0017
## Cumulative Proportion 0.97718 0.98042 0.98341 0.98609 0.98848 0.99038 0.9921
## PC28 PC29 PC30 PC31 PC32 PC33 PC34
## Standard deviation 4.05716 3.46035 3.37936 3.31179 3.1429 3.01667 2.95098
## Proportion of Variance 0.00166 0.00121 0.00115 0.00111 0.0010 0.00092 0.00088
## Cumulative Proportion 0.99374 0.99495 0.99610 0.99721 0.9982 0.99912 1.00000
```
```
fifa.pca$rotation[,1:3]
```
```
## PC1 PC2 PC3
## Acceleration -0.13674335 0.0944478107 -0.141193842
## Aggression -0.15322857 -0.2030537953 0.105372978
## Agility -0.13598896 0.1196301737 -0.017763073
## Balance -0.11474980 0.0865672989 -0.072629834
## Ball.control -0.21256812 0.0585990154 0.038243802
## Composure -0.13288575 -0.0005635262 0.163887637
## Crossing -0.21347202 0.0458210228 0.124741235
## Curve -0.20656129 0.1254947094 0.180634730
## Dribbling -0.23090613 0.1259819707 -0.002905379
## Finishing -0.19431248 0.2534086437 0.006524693
## Free.kick.accuracy -0.18528508 0.0960404650 0.219976709
## GK.diving 0.20757999 0.0480952942 0.326161934
## GK.handling 0.19811125 0.0464542553 0.314165622
## GK.kicking 0.19261876 0.0456942190 0.304722126
## GK.positioning 0.19889113 0.0456384196 0.317850121
## GK.reflexes 0.21081755 0.0489895700 0.332751195
## Heading.accuracy -0.17218607 -0.1115416097 -0.125135161
## Interceptions -0.15038835 -0.3669025376 0.162064432
## Jumping -0.03805419 -0.0579221746 0.012263523
## Long.passing -0.16849827 -0.0435009943 0.224584171
## Long.shots -0.21415526 0.1677851237 0.157466462
## Marking -0.14863254 -0.4076616902 0.078298039
## Penalties -0.16328049 0.1407803994 0.024403976
## Positioning -0.22053959 0.1797895382 0.020734699
## Reactions -0.04780774 0.0001844959 0.250247098
## Short.passing -0.18176636 -0.0033124240 0.118611543
## Shot.power -0.19592137 0.0989340925 0.101707386
## Sliding.tackle -0.14977558 -0.4024030355 0.069945935
## Sprint.speed -0.13387287 0.0804847541 -0.146049405
## Stamina -0.17231648 -0.0634639786 -0.016509650
## Standing.tackle -0.15992073 -0.4039763876 0.086418583
## Strength -0.02186264 -0.1151018222 0.096053864
## Vision -0.13027169 0.1152237536 0.260985686
## Volleys -0.18465028 0.1888480712 0.076974579
```
It’s clear we can capture a large amount of the variance in this data with just a few components. In fact **just 2 components yield 76% of the variance!**
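That cumulative proportion can also be recovered directly from the standard deviations stored in the `prcomp` object, since the eigenvalues are their squares (a quick check using the `fifa.pca` fit from above):
```
# proportion of variance captured by the first two components
sum(fifa.pca$sdev[1:2]^2) / sum(fifa.pca$sdev^2)
```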
Now let’s look at some projections of the players onto those 2 principal components. The scores are located in the `fifa.pca$x` matrix.
```
plot(fifa.pca$x[,1],fifa.pca$x[,2], col=alpha(c('red','blue','green','black')[as.factor(fifa$position)],0.4), pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2', main = 'Projection of Players onto 2 PCs, Colored by Position')
legend(125,-45, c('Forward','Defense','Midfield','GoalKeeper'), c('red','blue','green','black'), bty = 'n', cex=1.1)
```
Figure 14\.7: Projection of the FIFA players’ skill data into 2 dimensions. Player positions are evident.
The plot easily separates the field players from the goal keepers, and the forwards from the defenders. As one might expect, midfielders are sandwiched between the forwards and defenders, as they play both roles on the field. The position labels were imperfect \- they were assigned from a list of each player’s preferred positions \- which likely explains why some players labeled as midfielders appear above the cloud of red points.
We can also attempt a 3\-dimensional projection of this data:
```
library(plotly)
library(processx)
colors=alpha(c('red','blue','green','black')[as.factor(fifa$position)],0.4)
graph = plot_ly(x = fifa.pca$x[,1],
y = fifa.pca$x[,2],
z= fifa.pca$x[,3],
type='scatter3d',
mode="markers",
marker = list(color=colors))
graph
```
Figure 14\.8: Projection of the FIFA players’ skill data into 3 dimensions. Player positions are evident.
#### 14\.3\.0\.3 The BiPlot
BiPlots can be tricky when we have so much data and so many variables. As you will see, the default image leaves much to be desired, and will motivate our move to the `ggfortify` library to use the `autoplot()` function. The image takes too long to render and is practically unreadable with the whole dataset, so I demonstrate the default `biplot()` function with a sample of the observations.
```
biplot(fifa.pca$x[sample(1:16501,2000),],fifa.pca$rotation[,1:2], cex=0.5, arrow.len = 0.1)
```
Figure 14\.9: The default biplot function leaves much to be desired here
The `autoplot()` function uses the `ggplot2` package and is superior when we have more data.
```
autoplot(fifa.pca, data = fifa,
colour = alpha(c('red','blue','green','orange')[as.factor(fifa$pos)],0.4),
loadings = TRUE, loadings.colour = 'black',
loadings.label = TRUE, loadings.label.size = 3.5, loadings.label.alpha = 1,
loadings.label.fontface='bold',
loadings.label.colour = 'black',
loadings.label.repel=T)
```
```
## Warning: `select_()` was deprecated in dplyr 0.7.0.
## Please use `select()` instead.
```
```
## Warning in if (value %in% columns) {: the condition has length > 1 and only the
## first element will be used
```
Figure 14\.10: The `autoplot()` biplot has many more options for readability.
Many expected conclusions can be drawn from this biplot. The defenders tend to have stronger skills of *interception, slide tackling, standing tackling,* and *marking*, while forwards are generally stronger when it comes to *finishing, long.shots, volleys, agility* etc. Midfielders are likely to be stronger with *crossing, passing, ball.control,* and *stamina.*
#### 14\.3\.0\.4 Further Exploration
Let’s see what happens if we color by the variable `Overall`, which is designed to rank a player’s overall quality of play.
```
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1],fifa.pca$x[,2], col=fifa$Overall,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2')
color.legend(130,-100,220,-90,seq(0,100,50),alpha(magma(100),0.6),gradient="x")
```
Figure 14\.11: Projection of Players onto 2 PCs, Colored by “Overall” Ability
We can attempt to label some of the outliers, too. First, we’ll look at the 0\.001 and 0\.999 quantiles to get a sense of what coordinates we want to highlight. Then we’ll label any players outside of those bounds and surely find some familiar names.
```
# This first chunk is identical to the chunk above. I have to reproduce the plot to label it.
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1], fifa.pca$x[,2], col=fifa$Overall,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2',
xlim=c(-175,250), ylim = c(-150,150))
color.legend(130,-100,220,-90,seq(0,100,50),alpha(magma(100),0.6),gradient="x")
# Identify quantiles (high/low) for each PC
(quant1h = quantile(fifa.pca$x[,1],0.9997))
```
```
## 99.97%
## 215.4003
```
```
(quant1l = quantile(fifa.pca$x[,1],0.0003))
```
```
## 0.03%
## -130.1493
```
```
(quant2h = quantile(fifa.pca$x[,2],0.9997))
```
```
## 99.97%
## 100.208
```
```
(quant2l = quantile(fifa.pca$x[,2],0.0003))
```
```
## 0.03%
## -101.8846
```
```
# Next I create a logical vector which identifies the outliers
# (i.e. TRUE = outlier, FALSE = not outlier)
outliers = fifa.pca$x[,1] > quant1h | fifa.pca$x[,1] < quant1l |
fifa.pca$x[,2] > quant2h | fifa.pca$x[,2] < quant2l
# Here I label them by name, jittering the coordinates of the text so it's more readable
text(jitter(fifa.pca$x[outliers,1],factor=1), jitter(fifa.pca$x[outliers,2],factor=600), fifa$Name[outliers], cex=0.7)
```
What about by wage? First we need to convert their salary, denominated in Euros, to a numeric variable.
```
# First, observe the problem with the Wage column as it stands
head(fifa$Wage)
```
```
## [1] "€565K" "€565K" "€280K" "€510K" "€230K" "€355K"
```
```
# Use regular expressions to remove the Euro sign and K from the wage column
# then convert to numeric
fifa$Wage = as.numeric(gsub('[€K]', '', fifa$Wage))
# new data:
head(fifa$Wage)
```
```
## [1] 565 565 280 510 230 355
```
```
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1], fifa.pca$x[,2], col=fifa$Wage,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2')
color.legend(130,-100,220,-90,c(min(fifa$Wage),max(fifa$Wage)),alpha(magma(100),0.6),gradient="x")
```
Figure 14\.12: Projection of Players onto 2 Principal Components, Colored by Wage
#### 14\.3\.0\.5 Rotations of Principal Components
We might be able to align our axes more squarely with groups of original variables that are strongly correlated and tell a story. Perhaps we might be able to find latent variables that indicate the position\-specific abilities of players. Let’s see what falls out after varimax and quartimax rotation. Recall that in order to employ rotations, we first have to decide on a number of components. A quick look at a screeplot or the cumulative proportion of variance explained should help to that aim.
```
plot(cumsum(fifa.pca$sdev^2)/sum(fifa.pca$sdev^2),
type = 'b',
cex=.75,
xlab = "# of components",
ylab = "% variance explained")
```
Figure 14\.13: Cumulative proportion of variance explained by rank of the decomposition (i.e. the number of components)
Let’s use 3 components, since the marginal benefit of using additional components seems small. Once we rotate the loadings, we can try to use a heatmap to visualize what they might represent.
```
vmax = varimax(fifa.pca$rotation[,1:3])
loadings = fifa.pca$rotation[,1:3]%*%vmax$rotmat
melt.loadings = melt(loadings)
ggplot(data = melt.loadings, aes(x=Var2, y=Var1, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1))
```
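To help interpret the rotated components, one option (a small sketch using the `loadings` matrix computed above) is to list the variables with the largest absolute loadings on each rotated axis:
```
# the five variables loading most heavily (in absolute value) on each
# rotated component; row names carry over from fifa.pca$rotation
apply(loadings, 2, function(l) names(sort(abs(l), decreasing = TRUE))[1:5])
```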
14\.4 Cancer Genetics
---------------------
Read in the data. The `load()` function reads in a dataset that has 20532 columns and may take some time. You may want to save and clear your environment (or open a new RStudio window) if you have other work open.
```
load('LAdata/geneCancerUCI.RData')
table(cancerlabels$Class)
```
```
##
## BRCA COAD KIRC LUAD PRAD
## 300 78 146 141 136
```
Original Source: *The cancer genome atlas pan\-cancer analysis project*
* BRCA \= Breast Invasive Carcinoma
* COAD \= Colon Adenocarcinoma
* KIRC \= Kidney Renal clear cell Carcinoma
* LUAD \= Lung Adenocarcinoma
* PRAD \= Prostate Adenocarcinoma
We are going to want to plot the data points according to their different classification labels, so we should pick out a nice color palette for categorical attributes. We chose the palette `Dark2`, but feel free to substitute any categorical palette that appeals to you in the code below!
```
library(RColorBrewer)
display.brewer.all()
palette(brewer.pal(n = 8, name = "Dark2"))
```
The first step is typically to explore the data. Obviously we can’t look at ALL the scatter plots of input variables. For the fun of it, let’s look at a few of these scatter plots which we’ll pick at random. First pick two column numbers at random, then draw the plot, coloring by the label. You could repeat this chunk several times to explore different combinations. Can you find one that does a good job of separating any of the types of cancer?
```
par(mfrow=c(2,3))
for(i in 1:6){
randomColumns = sample(2:20532,2)
plot(cancer[,randomColumns],col = cancerlabels$Class)
}
```
Figure 14\.14: Random 2\-Dimensional Projections of Cancer Data
To restore our plot window from that 2\-by\-3 grid, we run `dev.off()`.
```
dev.off()
```
```
## null device
## 1
```
### 14\.4\.1 Computing the PCA
The `prcomp()` function is the one I most often recommend for reasonably sized principal component calculations in R. This function returns a list with class “prcomp” containing the following components (from help prcomp):
1. `sdev`: the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix).
2. `rotation`: the matrix of *variable loadings* (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
3. `x`: if retx is true *the value of the rotated data (i.e. the scores)* (the centred (and scaled if requested) data multiplied by the rotation matrix) is returned. Hence, cov(x) is the diagonal matrix \\(diag(sdev^2\)\\). For the formula method, napredict() is applied to handle the treatment of values omitted by the na.action.
4. `center`, `scale`: the centering and scaling used, or FALSE.
The option `scale = TRUE` inside the `prcomp()` function instructs the program to use **correlation PCA**. The **default is covariance PCA**.
Now let’s compute the *first three* principal components and examine the data projected onto the first 2 axes. We can then look in 3 dimensions.
```
pcaOut = prcomp(cancer,rank = 3, scale = F)
```
```
plot(pcaOut$x[,1], pcaOut$x[,2],
col = cancerlabels$Class,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'Genetic Samples Projected into 2-dimensions \n using COVARIANCE PCA')
```
Figure 14\.15: Covariance PCA of genetic data
### 14\.4\.2 3D plot with the `plotly` package
Make sure the plotly package is installed for the 3d plot. To get the plot points colored by group, we need to execute the following command that creates a vector of colors (specifying a color for each observation).
```
colors = factor(palette())
colors = colors[cancerlabels$Class]
table(colors, cancerlabels$Class)
```
```
##
## colors BRCA COAD KIRC LUAD PRAD
## #00000499 300 0 0 0 0
## #01010799 0 78 0 0 0
## #02020B99 0 0 146 0 0
## #03031199 0 0 0 141 0
## #05041799 0 0 0 0 136
## #07061C99 0 0 0 0 0
## #09072199 0 0 0 0 0
## #0C092699 0 0 0 0 0
## #0F0B2C99 0 0 0 0 0
## #120D3299 0 0 0 0 0
## #150E3799 0 0 0 0 0
## #180F3E99 0 0 0 0 0
## #1C104499 0 0 0 0 0
## #1F114A99 0 0 0 0 0
## #22115099 0 0 0 0 0
## #26125799 0 0 0 0 0
## #2A115D99 0 0 0 0 0
## #2F116399 0 0 0 0 0
## #33106899 0 0 0 0 0
## #38106C99 0 0 0 0 0
## #3C0F7199 0 0 0 0 0
## #400F7499 0 0 0 0 0
## #45107799 0 0 0 0 0
## #49107899 0 0 0 0 0
## #4E117B99 0 0 0 0 0
## #51127C99 0 0 0 0 0
## #56147D99 0 0 0 0 0
## #5A167E99 0 0 0 0 0
## #5D177F99 0 0 0 0 0
## #61198099 0 0 0 0 0
## #661A8099 0 0 0 0 0
## #6A1C8199 0 0 0 0 0
## #6D1D8199 0 0 0 0 0
## #721F8199 0 0 0 0 0
## #76218199 0 0 0 0 0
## #79228299 0 0 0 0 0
## #7D248299 0 0 0 0 0
## #82258199 0 0 0 0 0
## #86278199 0 0 0 0 0
## #8A298199 0 0 0 0 0
## #8E2A8199 0 0 0 0 0
## #922B8099 0 0 0 0 0
## #962C8099 0 0 0 0 0
## #9B2E7F99 0 0 0 0 0
## #9F2F7F99 0 0 0 0 0
## #A3307E99 0 0 0 0 0
## #A7317D99 0 0 0 0 0
## #AB337C99 0 0 0 0 0
## #AF357B99 0 0 0 0 0
## #B3367A99 0 0 0 0 0
## #B8377999 0 0 0 0 0
## #BC397899 0 0 0 0 0
## #C03A7699 0 0 0 0 0
## #C43C7599 0 0 0 0 0
## #C83E7399 0 0 0 0 0
## #CD407199 0 0 0 0 0
## #D0416F99 0 0 0 0 0
## #D5446D99 0 0 0 0 0
## #D8456C99 0 0 0 0 0
## #DC486999 0 0 0 0 0
## #DF4B6899 0 0 0 0 0
## #E34E6599 0 0 0 0 0
## #E6516399 0 0 0 0 0
## #E9556299 0 0 0 0 0
## #EC586099 0 0 0 0 0
## #EE5C5E99 0 0 0 0 0
## #F1605D99 0 0 0 0 0
## #F2655C99 0 0 0 0 0
## #F4695C99 0 0 0 0 0
## #F66D5C99 0 0 0 0 0
## #F7735C99 0 0 0 0 0
## #F9785D99 0 0 0 0 0
## #F97C5D99 0 0 0 0 0
## #FA815F99 0 0 0 0 0
## #FB866199 0 0 0 0 0
## #FC8A6299 0 0 0 0 0
## #FC906599 0 0 0 0 0
## #FCEFB199 0 0 0 0 0
## #FCF4B699 0 0 0 0 0
## #FCF8BA99 0 0 0 0 0
## #FCFDBF99 0 0 0 0 0
## #FD956799 0 0 0 0 0
## #FD9A6A99 0 0 0 0 0
## #FDDC9E99 0 0 0 0 0
## #FDE1A299 0 0 0 0 0
## #FDE5A799 0 0 0 0 0
## #FDEBAB99 0 0 0 0 0
## #FE9E6C99 0 0 0 0 0
## #FEA36F99 0 0 0 0 0
## #FEA87399 0 0 0 0 0
## #FEAC7699 0 0 0 0 0
## #FEB27A99 0 0 0 0 0
## #FEB67D99 0 0 0 0 0
## #FEBB8199 0 0 0 0 0
## #FEC08599 0 0 0 0 0
## #FEC48899 0 0 0 0 0
## #FEC98D99 0 0 0 0 0
## #FECD9099 0 0 0 0 0
## #FED39599 0 0 0 0 0
## #FED79999 0 0 0 0 0
```
```
library(plotly)
graph = plot_ly(x = pcaOut$x[,1],
y = pcaOut$x[,2],
z= pcaOut$x[,3],
type='scatter3d',
mode="markers",
marker = list(color=colors))
graph
```
### 14\.4\.3 3D plot with the `rgl` package
```
library(rgl)
```
```
##
## Attaching package: 'rgl'
```
```
## The following object is masked from 'package:plotrix':
##
## mtext3d
```
```
knitr::knit_hooks$set(webgl = hook_webgl)
```
Make sure the rgl package is installed for the 3d plot.
```
plot3d(x = pcaOut$x[,1],
y = pcaOut$x[,2],
z= pcaOut$x[,3],
col = colors,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
zlab = "Principal Component 3")
```
### 14\.4\.4 Variance explained
Proportion of variance explained by the first 2 and 3 components:
```
summary(pcaOut)
```
```
## Importance of first k=3 (out of 801) components:
## PC1 PC2 PC3
## Standard deviation 75.7407 61.6805 58.57297
## Proportion of Variance 0.1584 0.1050 0.09472
## Cumulative Proportion 0.1584 0.2634 0.35815
```
```
# Alternatively, if you had computed ALL the principal components (i.e. omitted the rank=3 option) then
# you could directly compute the proportions of variance explained using what we know about the
# eigenvalues:
# sum(pcaOut$sdev[1:2]^2)/sum(pcaOut$sdev^2)
# sum(pcaOut$sdev[1:3]^2)/sum(pcaOut$sdev^2)
```
### 14\.4\.5 Using Correlation PCA
The data involved in this exercise are actually on the same scale, and normalizing them may not be in your best interest because of this. However, it’s always a good idea to explore both decompositions if you have time.
```
pca.cor = prcomp(cancer, rank=3, scale =T)
```
An error message! We cannot rescale a constant/zero column to unit variance. The solution is to check for columns with zero variance and remove them, then re\-check the dimensions of the matrix to see how many columns we lost.
```
cancer = cancer[,apply(cancer, 2, sd)>0 ]
dim(cancer)
```
```
## [1] 801 20264
```
Once we’ve taken care of those zero\-variance columns, we can proceed to compute the correlation PCA:
```
pca.cor = prcomp(cancer, rank=3, scale =T)
```
```
plot(pca.cor$x[,1], pca.cor$x[,2],
col = cancerlabels$Class,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'Genetic Samples Projected into 2-dimensions \n using CORRELATION PCA')
```
Figure 14\.16: Correlation PCA of genetic data
And it’s clear just from the 2\-dimensional projection that correlation PCA does not seem to work as well as covariance PCA when it comes to separating the 5 different types of cancer.
Indeed, we can confirm this from the proportion of variance explained, which is substantially lower than that of covariance PCA:
```
summary(pca.cor)
```
```
## Importance of first k=3 (out of 801) components:
## PC1 PC2 PC3
## Standard deviation 46.2145 42.11838 39.7823
## Proportion of Variance 0.1054 0.08754 0.0781
## Cumulative Proportion 0.1054 0.19294 0.2710
```
### 14\.4\.6 Range standardization as an alternative to covariance PCA
We can also put all the variables on a scale of 0 to 1 if we’re concerned about issues with scale (in this case, scale wasn’t an issue \- but the following approach might still provide interesting projections in some datasets). This transformation would be as follows for each variable \\(\\mathbf{x}\\):
\\\[\\frac{\\mathbf{x} \- \\min(\\mathbf{x})}{\\max(\\mathbf{x})\-\\min(\\mathbf{x})}\\]
```
cancer = cancer[,apply(cancer,2,sd)>0]
min = apply(cancer,2,min)
range = apply(cancer,2, function(x){max(x)-min(x)})
minmax.cancer=scale(cancer,center=min,scale=range)
```
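As a quick check (a minimal sketch), the rescaled matrix should now span exactly 0 to 1:
```
# every column of minmax.cancer has been shifted and scaled to [0, 1]
range(minmax.cancer)
```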
Then we can compute the covariance PCA of that range\-standardized data without concern:
```
minmax.pca = prcomp(minmax.cancer, rank=3, scale=F )
```
```
plot(minmax.pca$x[,1],minmax.pca$x[,2],col = cancerlabels$Class, xlab = "Principal Component 1", ylab = "Principal Component 2")
```
Figure 14\.17: Covariance PCA of range standardized genetic data
14\.1 Dimension reduction
-------------------------
It is quite common for an analyst to have too many variables. There are two different solutions to this problem:
1. **Feature Selection**: Choose a subset of existing variables to be used in a model.
2. **Feature Extraction**: Create a new set of features which are combinations of original variables.
### 14\.1\.1 Feature Selection
Let’s think for a minute about feature selection. What are we really doing when we consider a subset of our existing variables? Take the two dimensional data in Example [**??**](#ex:pcabasis) (while two\-dimensions rarely necessitate dimension reduction, the geometrical interpretation extends to higher dimensions as usual!). The centered data appears as follows:
Now say we perform some kind of feature selection (there are a number of ways to do this, chi\-square tests for instance) and we determine that the variable \\(\\x\_2\\) is more important than \\(\\x\_1\\). So we throw out \\(\\x\_1\\) and we’ve reduced the dimensions from \\(p\=2\\) to \\(k\=1\\). Geometrically, what does our new data look like? By dropping \\(\\x\_1\\) we set all of those horizontal coordinates to zero. In other words, we **project the data orthogonally** onto the \\(\\x\_2\\) axis, as illustrated in Figure [14\.1](pcaapp.html#fig:pcpointsselect).
Figure 14\.1: Geometrical Interpretation of Feature Selection: When we “drop” the variable \\(\\x\_1\\) from our analysis, we are projecting the data onto the span(\\(\\x\_2\\))
Now, how much information (variance) did we lose with this projection? The total variance in the original data is
\\\[\\\|\\x\_1\\\|^2\+\\\|\\x\_2\\\|^2\.\\]
The variance of our data reduction is
\\\[\\\|\\x\_2\\\|^2\.\\]
Thus, the proportion of the total information (variance) we’ve kept is
\\\[\\frac{\\\|\\x\_2\\\|^2}{\\\|\\x\_1\\\|^2\+\\\|\\x\_2\\\|^2}\=\\frac{6\.01}{5\.6\+6\.01} \= 51\.7\\%.\\]
Our reduced dimensional data contains only 51\.7% of the variance of the original data. We’ve lost a lot of information!
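The arithmetic behind that proportion is simple enough to verify directly in R, using the squared norms quoted above:
```
# proportion of total variance retained after projecting onto x2 alone
6.01 / (5.6 + 6.01)
```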
The fact that feature selection omits variance in our predictor variables does not make it a bad thing! Obviously, getting rid of variables which have no relationship to a target variable (in the case of *supervised* modeling like prediction and classification) is a good thing. But, in the case of *unsupervised* learning techniques, where there is no target variable involved, we must be extra careful when it comes to feature selection. In summary,
1. Feature Selection is important. Examples include:
* Removing variables which have little to no impact on a target variable in supervised modeling (forward/backward/stepwise selection).
* Removing variables which have obvious strong correlation with other predictors.
* Removing variables that are not interesting in unsupervised learning (For example, you may not want to use the words “the” and “of” when clustering text).
2. Feature Selection is an orthogonal projection of the original data onto the span of the variables you choose to keep.
3. Feature selection should always be done with care and justification.
* In regression, could create problems of endogeneity (errors correlated with predictors \- omitted variable bias).
* For unsupervised modelling, could lose important information.
### 14\.1\.2 Feature Extraction
PCA is the most common form of feature extraction. The rotation of the space shown in Example [**??**](#ex:pcabasis) represents the creation of new features which are linear combinations of the original features. If we have \\(p\\) potential variables for a model and want to reduce that number to \\(k\\), then the first \\(k\\) principal components combine the individual variables in such a way that is guaranteed to capture as much “information” (variance) as possible. Again, take our two\-dimensional data as an example. When we reduce our data down to one\-dimension using principal components, we essentially do the same orthogonal projection that we did in Feature Selection, only in this case we conduct that projection in the new basis of principal components. Recall that for this data, our first principal component \\(\\v\_1\\) was \\\[\\v\_1 \= \\pm 0\.69 \\\\0\.73 \\mp.\\]
Projecting the data onto the first principal component is illustrated in Figure [14\.2](pcaapp.html#fig:pcaproj).
Figure 14\.2: Illustration of Feature Extraction via PCA
How much variance do we keep with \\(k\\) principal components? The proportion of variance explained by each principal component is the ratio of the corresponding eigenvalue to the sum of the eigenvalues (which gives the total amount of variance in the data).
:::{theorem name\=‘Proportion of Variance Explained’ \#pcpropvar}
The proportion of variance explained by the projection of the data onto principal component \\(\\v\_i\\) is
\\\[\\frac{\\lambda\_i}{\\sum\_{j\=1}^p \\lambda\_j}.\\]
Similarly, the proportion of variance explained by the projection of the data onto the first \\(k\\) principal components (\\(k \\le p\\)) is
\\\[ \\frac{\\sum\_{i\=1}^k\\lambda\_i}{\\sum\_{j\=1}^p \\lambda\_j}\\]
:::
In our simple 2 dimensional example we were able to keep
\\\[\\frac{\\lambda\_1}{\\lambda\_1\+\\lambda\_2}\=\\frac{10\.61}{10\.61\+1\.00} \= 91\.38\\%\\]
of our variance in one dimension.
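This ratio is easy to compute for any fitted PCA, since `prcomp()` stores the square roots of the eigenvalues in `sdev`. The following is a small sketch (the helper name `prop_var_explained` and the object name `pca` are illustrative, not part of the text):
```
# proportion of variance explained by the first k principal components,
# computed from the eigenvalues lambda_i = sdev_i^2 of a prcomp fit
prop_var_explained = function(pca, k) {
  lambda = pca$sdev^2
  sum(lambda[1:k]) / sum(lambda)
}
```
Applied to the UK food `pca` object later in this chapter, for example, `prop_var_explained(pca, 2)` should reproduce the 0\.9320 cumulative proportion reported by `summary(pca)`.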
14\.2 Exploratory Analysis
--------------------------
### 14\.2\.1 UK Food Consumption
#### 14\.2\.1\.1 Explore the Data
The data for this example can be read directly from our course webpage. When we first examine the data, we will see that the rows correspond to different types of food/drink and the columns correspond to the 4 countries within the UK. Our first matter of business is transposing this data so that the 4 countries become our observations (i.e. rows).
```
food=read.csv("http://birch.iaa.ncsu.edu/~slrace/LinearAlgebra2021/Code/ukfood.csv",
header=TRUE,row.names=1)
```
```
library(reshape2) #melt data matrix into 3 columns
library(ggplot2) #heatmap
head(food)
```
```
## England Wales Scotland N.Ireland
## Cheese 105 103 103 66
## Carcass meat 245 227 242 267
## Other meat 685 803 750 586
## Fish 147 160 122 93
## Fats and oils 193 235 184 209
## Sugars 156 175 147 139
```
```
food=as.data.frame(t(food))
head(food)
```
```
## Cheese Carcass meat Other meat Fish Fats and oils Sugars
## England 105 245 685 147 193 156
## Wales 103 227 803 160 235 175
## Scotland 103 242 750 122 184 147
## N.Ireland 66 267 586 93 209 139
## Fresh potatoes Fresh Veg Other Veg Processed potatoes Processed Veg
## England 720 253 488 198 360
## Wales 874 265 570 203 365
## Scotland 566 171 418 220 337
## N.Ireland 1033 143 355 187 334
## Fresh fruit Cereals Beverages Soft drinks Alcoholic drinks
## England 1102 1472 57 1374 375
## Wales 1137 1582 73 1256 475
## Scotland 957 1462 53 1572 458
## N.Ireland 674 1494 47 1506 135
## Confectionery
## England 54
## Wales 64
## Scotland 62
## N.Ireland 41
```
Next we will visualize the information in this data using a simple heat map. To do this we will standardize and then melt the data using the `reshape2` package, and then use a `ggplot()` heatmap.
```
food.std = scale(food, center=T, scale = T)
food.melt = melt(food.std, id.vars = row.names(food.std), measure.vars = 1:17)
ggplot(data = food.melt, aes(x=Var1, y=Var2, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-2,2), space = "Lab"
) + theme_minimal()+
theme(axis.title.x = element_blank(),axis.title.y = element_blank(),
axis.text.y = element_text(face = 'bold', size = 12, colour = 'black'),
axis.text.x = element_text(angle = 45, vjust = 1, face = 'bold',
size = 12, colour = 'black', hjust = 1))+coord_fixed()
```
#### 14\.2\.1\.2 `prcomp()` function for PCA
The `prcomp()` function is the one I most often recommend for reasonably sized principal component calculations in R. This function returns a list with class “prcomp” containing the following components (from help prcomp):
1. `sdev`: the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix).
- `rotation`: the matrix of *variable loadings* (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
- `x`: if retx is true *the value of the rotated data (i.e. the scores)* (the centred (and scaled if requested) data multiplied by the rotation matrix) is returned. Hence, cov(x) is the diagonal matrix \\(diag(sdev^2\)\\). For the formula method, napredict() is applied to handle the treatment of values omitted by the na.action.
- `center`, `scale`: the centering and scaling used, or FALSE.
The option `scale = TRUE` inside the `prcomp()` function instructs the program to use **correlation PCA**. *The default is covariance PCA*.
```
pca=prcomp(food, scale = T)
```
This first plot just looks at magnitudes of eigenvalues \- it is essentially the screeplot in barchart form.
```
summary(pca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4
## Standard deviation 3.4082 2.0562 1.07524 6.344e-16
## Proportion of Variance 0.6833 0.2487 0.06801 0.000e+00
## Cumulative Proportion 0.6833 0.9320 1.00000 1.000e+00
```
```
plot(pca, main = "Bar-style Screeplot")
```
The next plot shows our four data points (the four countries) projected onto the 2\-dimensional subspace (from 17 dimensions) that captures as much information (i.e. variance) as possible.
```
plot(pca$x,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'The four observations projected into 2-dimensional space')
text(pca$x[,1], pca$x[,2],row.names(food))
```
#### 14\.2\.1\.3 The BiPlot
Now we can also view our original variable axes projected down onto that same space!
```
biplot(pca$x,pca$rotation, cex = c(1.5, 1), col = c('black','red'))#,
```
Figure 14\.3: BiPlot: The observations and variables projected onto the same plane.
```
# xlim = c(-0.8,0.8), ylim = c(-0.6,0.7))
```
#### 14\.2\.1\.4 Formatting the biplot for readability
I will soon introduce the `autoplot()` function from the `ggfortify` package, but for now I just want to show you that you can specify *which* variables (and observations) to include in the biplot by directly specifying the loadings matrix and scores matrix of interest in the biplot function:
```
desired.variables = c(2,4,6,8,10)
biplot(pca$x, pca$rotation[desired.variables,1:2], cex = c(1.5, 1),
col = c('black','red'), xlim = c(-6,5), ylim = c(-4,4))
```
Figure 14\.4: Specify a Subset of Variables/Observations to Include in the Biplot
#### 14\.2\.1\.5 What are all these axes?
Those numbers relate to the scores on PC1 and PC2 (sometimes normalized so that each new variable has variance 1 \- and sometimes not) and the loadings on PC1 and PC2 (sometimes normalized so that each variable vector is a unit vector \- and sometimes scaled by the eigenvalues or square roots of the eigenvalues in some fashion).
Generally, I’ve rarely found it useful to hunt down how each package renders the axes of the biplot, as they should all provide the same information regardless of the scale of the *numbers* on the axes. We don’t actually use those numbers to draw conclusions; we use the directions of the arrows and the layout of the points in reference to those direction arrows.
```
vmax = varimax(pca$rotation[,1:2])
new.scores = pca$x[,1:2] %*% vmax$rotmat
biplot(new.scores, vmax$loadings[,1:2],
# xlim=c(-60,60),
# ylim=c(-60,60),
cex = c(1.5, 1),
xlab = 'Rotated Axis 1',
ylab = 'Rotated Axis 2')
```
Figure 14\.5: Biplot with Rotated Loadings
```
vmax$loadings[,1:2]
```
```
## PC1 PC2
## Cheese 0.02571143 0.34751491
## Carcass meat -0.16660468 -0.24450375
## Other meat 0.11243721 0.27569481
## Fish 0.22437069 0.17788300
## Fats and oils 0.35728064 -0.22128124
## Sugars 0.30247003 0.07908986
## Fresh potatoes 0.22174898 -0.40880955
## Fresh Veg 0.26432097 0.09953752
## Other Veg 0.27836185 0.11640174
## Processed potatoes -0.17545152 0.39011648
## Processed Veg 0.29583164 0.05084727
## Fresh fruit 0.15852128 0.24360131
## Cereals 0.34963293 -0.13363398
## Beverages 0.30030152 0.07604823
## Soft drinks -0.36374762 0.07438738
## Alcoholic drinks 0.04243636 0.34240944
## Confectionery 0.05450175 0.32474821
```
14\.3 FIFA Soccer Players
-------------------------
#### 14\.3\.0\.1 Explore the Data
We begin by loading in the data and taking a quick look at the variables that we’ll be using in our PCA for this exercise. You may need to install the packages from the following `library()` statements.
```
library(reshape2) #melt correlation matrix into 3 columns
library(ggplot2) #correlation heatmap
library(ggfortify) #autoplot bi-plot
library(viridis) # magma palette
```
```
## Loading required package: viridisLite
```
```
library(plotrix) # color.legend
```
Now we’ll read the data directly from the web, take a peek at the first 5 rows, and explore some summary statistics.
```
## Name Age Photo
## 1 Cristiano Ronaldo 32 https://cdn.sofifa.org/48/18/players/20801.png
## 2 L. Messi 30 https://cdn.sofifa.org/48/18/players/158023.png
## 3 Neymar 25 https://cdn.sofifa.org/48/18/players/190871.png
## 4 L. Suárez 30 https://cdn.sofifa.org/48/18/players/176580.png
## 5 M. Neuer 31 https://cdn.sofifa.org/48/18/players/167495.png
## 6 R. Lewandowski 28 https://cdn.sofifa.org/48/18/players/188545.png
## Nationality Flag Overall Potential
## 1 Portugal https://cdn.sofifa.org/flags/38.png 94 94
## 2 Argentina https://cdn.sofifa.org/flags/52.png 93 93
## 3 Brazil https://cdn.sofifa.org/flags/54.png 92 94
## 4 Uruguay https://cdn.sofifa.org/flags/60.png 92 92
## 5 Germany https://cdn.sofifa.org/flags/21.png 92 92
## 6 Poland https://cdn.sofifa.org/flags/37.png 91 91
## Club Club.Logo Value Wage
## 1 Real Madrid CF https://cdn.sofifa.org/24/18/teams/243.png €95.5M €565K
## 2 FC Barcelona https://cdn.sofifa.org/24/18/teams/241.png €105M €565K
## 3 Paris Saint-Germain https://cdn.sofifa.org/24/18/teams/73.png €123M €280K
## 4 FC Barcelona https://cdn.sofifa.org/24/18/teams/241.png €97M €510K
## 5 FC Bayern Munich https://cdn.sofifa.org/24/18/teams/21.png €61M €230K
## 6 FC Bayern Munich https://cdn.sofifa.org/24/18/teams/21.png €92M €355K
## Special Acceleration Aggression Agility Balance Ball.control Composure
## 1 2228 89 63 89 63 93 95
## 2 2154 92 48 90 95 95 96
## 3 2100 94 56 96 82 95 92
## 4 2291 88 78 86 60 91 83
## 5 1493 58 29 52 35 48 70
## 6 2143 79 80 78 80 89 87
## Crossing Curve Dribbling Finishing Free.kick.accuracy GK.diving GK.handling
## 1 85 81 91 94 76 7 11
## 2 77 89 97 95 90 6 11
## 3 75 81 96 89 84 9 9
## 4 77 86 86 94 84 27 25
## 5 15 14 30 13 11 91 90
## 6 62 77 85 91 84 15 6
## GK.kicking GK.positioning GK.reflexes Heading.accuracy Interceptions Jumping
## 1 15 14 11 88 29 95
## 2 15 14 8 71 22 68
## 3 15 15 11 62 36 61
## 4 31 33 37 77 41 69
## 5 95 91 89 25 30 78
## 6 12 8 10 85 39 84
## Long.passing Long.shots Marking Penalties Positioning Reactions Short.passing
## 1 77 92 22 85 95 96 83
## 2 87 88 13 74 93 95 88
## 3 75 77 21 81 90 88 81
## 4 64 86 30 85 92 93 83
## 5 59 16 10 47 12 85 55
## 6 65 83 25 81 91 91 83
## Shot.power Sliding.tackle Sprint.speed Stamina Standing.tackle Strength
## 1 94 23 91 92 31 80
## 2 85 26 87 73 28 59
## 3 80 33 90 78 24 53
## 4 87 38 77 89 45 80
## 5 25 11 61 44 10 83
## 6 88 19 83 79 42 84
## Vision Volleys position
## 1 85 88 1
## 2 90 85 1
## 3 80 83 1
## 4 84 88 1
## 5 70 11 4
## 6 78 87 1
```
```
## Acceleration Aggression Agility Balance Ball.control
## Min. :11.00 Min. :11.00 Min. :14.00 Min. :11.00 Min. : 8
## 1st Qu.:56.00 1st Qu.:43.00 1st Qu.:55.00 1st Qu.:56.00 1st Qu.:53
## Median :67.00 Median :58.00 Median :65.00 Median :66.00 Median :62
## Mean :64.48 Mean :55.74 Mean :63.25 Mean :63.76 Mean :58
## 3rd Qu.:75.00 3rd Qu.:69.00 3rd Qu.:74.00 3rd Qu.:74.00 3rd Qu.:69
## Max. :96.00 Max. :96.00 Max. :96.00 Max. :96.00 Max. :95
## Composure Crossing Curve Dribbling Finishing
## Min. : 5.00 Min. : 5.0 Min. : 6.0 Min. : 2.00 Min. : 2.00
## 1st Qu.:51.00 1st Qu.:37.0 1st Qu.:34.0 1st Qu.:48.00 1st Qu.:29.00
## Median :60.00 Median :54.0 Median :48.0 Median :60.00 Median :48.00
## Mean :57.82 Mean :49.7 Mean :47.2 Mean :54.94 Mean :45.18
## 3rd Qu.:67.00 3rd Qu.:64.0 3rd Qu.:62.0 3rd Qu.:68.00 3rd Qu.:61.00
## Max. :96.00 Max. :91.0 Max. :92.0 Max. :97.00 Max. :95.00
## Free.kick.accuracy GK.diving GK.handling GK.kicking
## Min. : 4.00 Min. : 1.00 Min. : 1.00 Min. : 1.00
## 1st Qu.:31.00 1st Qu.: 8.00 1st Qu.: 8.00 1st Qu.: 8.00
## Median :42.00 Median :11.00 Median :11.00 Median :11.00
## Mean :43.08 Mean :16.78 Mean :16.55 Mean :16.42
## 3rd Qu.:57.00 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:14.00
## Max. :93.00 Max. :91.00 Max. :91.00 Max. :95.00
## GK.positioning GK.reflexes Heading.accuracy Interceptions
## Min. : 1.00 Min. : 1.00 Min. : 4.00 Min. : 4.00
## 1st Qu.: 8.00 1st Qu.: 8.00 1st Qu.:44.00 1st Qu.:26.00
## Median :11.00 Median :11.00 Median :55.00 Median :52.00
## Mean :16.54 Mean :16.91 Mean :52.26 Mean :46.53
## 3rd Qu.:14.00 3rd Qu.:14.00 3rd Qu.:64.00 3rd Qu.:64.00
## Max. :91.00 Max. :90.00 Max. :94.00 Max. :92.00
## Jumping Long.passing Long.shots Marking
## Min. :15.00 Min. : 7.00 Min. : 3.00 Min. : 4.00
## 1st Qu.:58.00 1st Qu.:42.00 1st Qu.:32.00 1st Qu.:22.00
## Median :66.00 Median :56.00 Median :51.00 Median :48.00
## Mean :64.84 Mean :52.37 Mean :47.11 Mean :44.09
## 3rd Qu.:73.00 3rd Qu.:64.00 3rd Qu.:62.00 3rd Qu.:63.00
## Max. :95.00 Max. :93.00 Max. :92.00 Max. :92.00
## Penalties Positioning Reactions Short.passing
## Min. : 5.00 Min. : 2.00 Min. :28.00 Min. :10.00
## 1st Qu.:39.00 1st Qu.:38.00 1st Qu.:55.00 1st Qu.:53.00
## Median :50.00 Median :54.00 Median :62.00 Median :62.00
## Mean :48.92 Mean :49.53 Mean :61.85 Mean :58.22
## 3rd Qu.:61.00 3rd Qu.:64.00 3rd Qu.:68.00 3rd Qu.:68.00
## Max. :92.00 Max. :95.00 Max. :96.00 Max. :92.00
## Shot.power Sliding.tackle Sprint.speed Stamina
## Min. : 3.00 Min. : 4.00 Min. :11.00 Min. :12.00
## 1st Qu.:46.00 1st Qu.:24.00 1st Qu.:57.00 1st Qu.:56.00
## Median :59.00 Median :52.00 Median :67.00 Median :66.00
## Mean :55.57 Mean :45.56 Mean :64.72 Mean :63.13
## 3rd Qu.:68.00 3rd Qu.:64.00 3rd Qu.:75.00 3rd Qu.:74.00
## Max. :94.00 Max. :91.00 Max. :96.00 Max. :95.00
## Standing.tackle Strength Vision Volleys
## Min. : 4.00 Min. :20.00 Min. :10.00 Min. : 4.00
## 1st Qu.:26.00 1st Qu.:58.00 1st Qu.:43.00 1st Qu.:30.00
## Median :54.00 Median :66.00 Median :54.00 Median :44.00
## Mean :47.41 Mean :65.24 Mean :52.93 Mean :43.13
## 3rd Qu.:66.00 3rd Qu.:74.00 3rd Qu.:64.00 3rd Qu.:57.00
## Max. :92.00 Max. :98.00 Max. :94.00 Max. :91.00
```
These variables are scores on the scale of \[0,100] that measure 34 key abilities of soccer players. No player has ever earned a score of 100 on any of these attributes \- no player is *perfect*!
It would be natural to assume some correlation between these variables and indeed, we see lots of it in the following heatmap visualization of the correlation matrix.
```
cor.matrix = cor(fifa[,13:46])
cor.matrix = melt(cor.matrix)
ggplot(data = cor.matrix, aes(x=Var1, y=Var2, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1), space = "Lab",
name="Correlation") + theme_minimal()+
theme(axis.title.x = element_blank(),axis.title.y = element_blank(),
axis.text.x = element_text(angle = 45, vjust = 1,
size = 9, hjust = 1))+coord_fixed()
```
Figure 14\.6: Heatmap of correlation matrix for 34 variables of interest
What jumps out right away are the “GK” (Goal Keeping) abilities \- these attributes have *very* strong positive correlation with one another and negative correlation with the other abilities. After all, goal keepers are not traditionally well known for their dribbling, passing, and finishing abilities!
Outside of that, we see a lot of red in this correlation matrix – many attributes share a lot of information. This is the type of situation where PCA shines.
#### 14\.3\.0\.2 Principal Components Analysis
Let’s take a look at the principal components analysis. Since the variables are on the same scale, I’ll start with **covariance PCA** (the default in R’s `prcomp()` function).
```
fifa.pca = prcomp(fifa[,13:46] )
```
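If the skill variables had instead been on different scales, we could ask `prcomp()` to standardize them first. The following is a minimal sketch of the correlation-PCA call on the same columns, shown only for comparison (the object name `fifa.pca.cor` is introduced here for illustration and is not used elsewhere in the text):

```
# Correlation PCA: center and scale each skill column to unit variance first
# (scale = TRUE is matched to prcomp's `scale.` argument)
fifa.pca.cor = prcomp(fifa[,13:46], scale = TRUE)

# Compare the variance captured by the first few components under the two scalings
summary(fifa.pca.cor)$importance[, 1:5]
```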
We can then print the summary of variance explained and the loadings on the first 3 components:
```
summary(fifa.pca)
```
```
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6
## Standard deviation 74.8371 43.5787 23.28767 20.58146 16.12477 10.71539
## Proportion of Variance 0.5647 0.1915 0.05468 0.04271 0.02621 0.01158
## Cumulative Proportion 0.5647 0.7561 0.81081 0.85352 0.87973 0.89131
## PC7 PC8 PC9 PC10 PC11 PC12 PC13
## Standard deviation 10.17785 9.11852 8.98065 8.5082 8.41550 7.93741 7.15935
## Proportion of Variance 0.01044 0.00838 0.00813 0.0073 0.00714 0.00635 0.00517
## Cumulative Proportion 0.90175 0.91013 0.91827 0.9256 0.93270 0.93906 0.94422
## PC14 PC15 PC16 PC17 PC18 PC19 PC20
## Standard deviation 7.06502 6.68497 6.56406 6.50459 6.22369 6.08812 6.00578
## Proportion of Variance 0.00503 0.00451 0.00434 0.00427 0.00391 0.00374 0.00364
## Cumulative Proportion 0.94926 0.95376 0.95811 0.96237 0.96628 0.97001 0.97365
## PC21 PC22 PC23 PC24 PC25 PC26 PC27
## Standard deviation 5.91320 5.66946 5.45018 5.15051 4.86761 4.34786 4.1098
## Proportion of Variance 0.00353 0.00324 0.00299 0.00267 0.00239 0.00191 0.0017
## Cumulative Proportion 0.97718 0.98042 0.98341 0.98609 0.98848 0.99038 0.9921
## PC28 PC29 PC30 PC31 PC32 PC33 PC34
## Standard deviation 4.05716 3.46035 3.37936 3.31179 3.1429 3.01667 2.95098
## Proportion of Variance 0.00166 0.00121 0.00115 0.00111 0.0010 0.00092 0.00088
## Cumulative Proportion 0.99374 0.99495 0.99610 0.99721 0.9982 0.99912 1.00000
```
```
fifa.pca$rotation[,1:3]
```
```
## PC1 PC2 PC3
## Acceleration -0.13674335 0.0944478107 -0.141193842
## Aggression -0.15322857 -0.2030537953 0.105372978
## Agility -0.13598896 0.1196301737 -0.017763073
## Balance -0.11474980 0.0865672989 -0.072629834
## Ball.control -0.21256812 0.0585990154 0.038243802
## Composure -0.13288575 -0.0005635262 0.163887637
## Crossing -0.21347202 0.0458210228 0.124741235
## Curve -0.20656129 0.1254947094 0.180634730
## Dribbling -0.23090613 0.1259819707 -0.002905379
## Finishing -0.19431248 0.2534086437 0.006524693
## Free.kick.accuracy -0.18528508 0.0960404650 0.219976709
## GK.diving 0.20757999 0.0480952942 0.326161934
## GK.handling 0.19811125 0.0464542553 0.314165622
## GK.kicking 0.19261876 0.0456942190 0.304722126
## GK.positioning 0.19889113 0.0456384196 0.317850121
## GK.reflexes 0.21081755 0.0489895700 0.332751195
## Heading.accuracy -0.17218607 -0.1115416097 -0.125135161
## Interceptions -0.15038835 -0.3669025376 0.162064432
## Jumping -0.03805419 -0.0579221746 0.012263523
## Long.passing -0.16849827 -0.0435009943 0.224584171
## Long.shots -0.21415526 0.1677851237 0.157466462
## Marking -0.14863254 -0.4076616902 0.078298039
## Penalties -0.16328049 0.1407803994 0.024403976
## Positioning -0.22053959 0.1797895382 0.020734699
## Reactions -0.04780774 0.0001844959 0.250247098
## Short.passing -0.18176636 -0.0033124240 0.118611543
## Shot.power -0.19592137 0.0989340925 0.101707386
## Sliding.tackle -0.14977558 -0.4024030355 0.069945935
## Sprint.speed -0.13387287 0.0804847541 -0.146049405
## Stamina -0.17231648 -0.0634639786 -0.016509650
## Standing.tackle -0.15992073 -0.4039763876 0.086418583
## Strength -0.02186264 -0.1151018222 0.096053864
## Vision -0.13027169 0.1152237536 0.260985686
## Volleys -0.18465028 0.1888480712 0.076974579
```
It’s clear we can capture a large amount of the variance in this data with just a few components. In fact **just 2 components yield 76% of the variance!**
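As a quick sanity check on that number, the cumulative proportion of variance can be recomputed directly from the component standard deviations stored in `fifa.pca$sdev`:

```
# Eigenvalues of the covariance matrix are the squared standard deviations
var.explained = fifa.pca$sdev^2 / sum(fifa.pca$sdev^2)

# Cumulative proportion for the first two components (approximately 0.76)
cumsum(var.explained)[1:2]
```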
Now let’s look at some projections of the players onto those 2 principal components. The scores are located in the `fifa.pca$x` matrix.
```
plot(fifa.pca$x[,1],fifa.pca$x[,2], col=alpha(c('red','blue','green','black')[as.factor(fifa$position)],0.4), pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2', main = 'Projection of Players onto 2 PCs, Colored by Position')
legend(125,-45, c('Forward','Defense','Midfield','GoalKeeper'), c('red','blue','green','black'), bty = 'n', cex=1.1)
```
Figure 14\.7: Projection of the FIFA players’ skill data into 2 dimensions. Player positions are evident.
The plot easily separates the field players from the goal keepers, and the forwards from the defenders. As one might expect, midfielders are sandwiched between the forwards and defenders, as they play both roles on the field. The labeling of player position was imperfect and was done using a list of the players’ preferred positions; this likely explains some of the players labeled as midfielders that appear above the cloud of red points.
We can also attempt a 3\-dimensional projection of this data:
```
library(plotly)
library(processx)
colors=alpha(c('red','blue','green','black')[as.factor(fifa$position)],0.4)
graph = plot_ly(x = fifa.pca$x[,1],
y = fifa.pca$x[,2],
z= fifa.pca$x[,3],
type='scatter3d',
mode="markers",
marker = list(color=colors))
graph
```
Figure 14\.8: Projection of the FIFA players’ skill data into 3 dimensions. Player positions are evident.
#### 14\.3\.0\.3 The BiPlot
BiPlots can be tricky when we have so much data and so many variables. As you will see, the default image leaves much to be desired, and will motivate our move to the `ggfortify` library to use the `autoplot()` function. The image takes too long to render and is practically unreadable with the whole dataset, so I demonstrate the default `biplot()` function with a sample of the observations.
```
biplot(fifa.pca$x[sample(1:16501,2000),],fifa.pca$rotation[,1:2], cex=0.5, arrow.len = 0.1)
```
Figure 14\.9: The default biplot function leaves much to be desired here
The `autoplot()` function uses the `ggplot2` package and is superior when we have more data.
```
autoplot(fifa.pca, data = fifa,
colour = alpha(c('red','blue','green','orange')[as.factor(fifa$pos)],0.4),
loadings = TRUE, loadings.colour = 'black',
loadings.label = TRUE, loadings.label.size = 3.5, loadings.label.alpha = 1,
loadings.label.fontface='bold',
loadings.label.colour = 'black',
loadings.label.repel=T)
```
```
## Warning: `select_()` was deprecated in dplyr 0.7.0.
## Please use `select()` instead.
```
```
## Warning in if (value %in% columns) {: the condition has length > 1 and only the
## first element will be used
```
Figure 14\.10: The `autoplot()` biplot has many more options for readability.
Many expected conclusions can be drawn from this biplot. The defenders tend to have stronger skills of *interception, slide tackling, standing tackling,* and *marking*, while forwards are generally stronger when it comes to *finishing, long.shots, volleys, agility* etc. Midfielders are likely to be stronger with *crossing, passing, ball.control,* and *stamina.*
#### 14\.3\.0\.4 Further Exploration
Let’s see what happens if we color by the variable `Overall`, which is designed to rank a player’s overall quality of play.
```
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1],fifa.pca$x[,2], col=fifa$Overall,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2')
color.legend(130,-100,220,-90,seq(0,100,50),alpha(magma(100),0.6),gradient="x")
```
Figure 14\.11: Projection of Players onto 2 PCs, Colored by “Overall” Ability
We can attempt to label some of the outliers, too. First, we’ll look at the 0\.0003 and 0\.9997 quantiles to get a sense of what coordinates we want to highlight. Then we’ll label any players outside of those bounds and surely find some familiar names.
```
# This first chunk is identical to the chunk above. I have to reproduce the plot to label it.
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1], fifa.pca$x[,2], col=fifa$Overall,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2',
xlim=c(-175,250), ylim = c(-150,150))
color.legend(130,-100,220,-90,seq(0,100,50),alpha(magma(100),0.6),gradient="x")
# Identify quantiles (high/low) for each PC
(quant1h = quantile(fifa.pca$x[,1],0.9997))
```
```
## 99.97%
## 215.4003
```
```
(quant1l = quantile(fifa.pca$x[,1],0.0003))
```
```
## 0.03%
## -130.1493
```
```
(quant2h = quantile(fifa.pca$x[,2],0.9997))
```
```
## 99.97%
## 100.208
```
```
(quant2l = quantile(fifa.pca$x[,2],0.0003))
```
```
## 0.03%
## -101.8846
```
```
# Next I create a logical vector which identifies the outliers
# (i.e. TRUE = outlier, FALSE = not outlier)
outliers = fifa.pca$x[,1] > quant1h | fifa.pca$x[,1] < quant1l |
fifa.pca$x[,2] > quant2h | fifa.pca$x[,2] < quant2l
# Here I label them by name, jittering the coordinates of the text so it's more readable
text(jitter(fifa.pca$x[outliers,1],factor=1), jitter(fifa.pca$x[outliers,2],factor=600), fifa$Name[outliers], cex=0.7)
```
What about by wage? First we need to convert their salary, denominated in Euros, to a numeric variable.
```
# First, observe the problem with the Wage column as it stands
head(fifa$Wage)
```
```
## [1] "€565K" "€565K" "€280K" "€510K" "€230K" "€355K"
```
```
# Use regular expressions to remove the Euro sign and K from the wage column
# then convert to numeric
fifa$Wage = as.numeric(gsub('[€K]', '', fifa$Wage))
# new data:
head(fifa$Wage)
```
```
## [1] 565 565 280 510 230 355
```
```
palette(alpha(magma(100),0.6))
plot(fifa.pca$x[,1], fifa.pca$x[,2], col=fifa$Wage,pch=16, xlab = 'Principal Component 1', ylab='Principal Component 2')
color.legend(130,-100,220,-90,c(min(fifa$Wage),max(fifa$Wage)),alpha(magma(100),0.6),gradient="x")
```
Figure 14\.12: Projection of Players onto 2 Principal Components, Colored by Wage
#### 14\.3\.0\.5 Rotations of Principal Components
We might be able to align our axes more squarely with groups of original variables that are strongly correlated and tell a story. Perhaps we can find latent variables that indicate the position\-specific ability of players. Let’s see what falls out after varimax and quartimax rotation. Recall that in order to employ rotations, we first have to decide on a number of components. A quick look at a screeplot or at the cumulative proportion of variance explained should help toward that aim.
```
plot(cumsum(fifa.pca$sdev^2)/sum(fifa.pca$sdev^2),
type = 'b',
cex=.75,
xlab = "# of components",
ylab = "% variance explained")
```
Figure 14\.13: Cumulative proportion of variance explained by rank of the decomposition (i.e. the number of components)
Let’s use 3 components, since the marginal benefit of using additional components seems small. Once we rotate the loadings, we can try to use a heatmap to visualize what they might represent.
```
vmax = varimax(fifa.pca$rotation[,1:3])
loadings = fifa.pca$rotation[,1:3]%*%vmax$rotmat
melt.loadings = melt(loadings)
ggplot(data = melt.loadings, aes(x=Var2, y=Var1, fill=value)) +
geom_tile(color = "white")+
scale_fill_gradient2(low = "blue", high = "red", mid = "white",
midpoint = 0, limit = c(-1,1))
```
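The text above also mentions quartimax rotation, but base R only ships `varimax()` (and `promax()`). The sketch below assumes the `GPArotation` package is installed and uses its `quartimax()` function on the same three loading vectors; treat it as an outline rather than part of the analysis above.

```
# Quartimax rotation of the first 3 principal component loadings
# (assumes install.packages("GPArotation") has been run)
library(GPArotation)

qmax = quartimax(fifa.pca$rotation[,1:3])

# Visualize the rotated loadings with the same heatmap style as above
melt.qmax = melt(qmax$loadings[,])
ggplot(data = melt.qmax, aes(x=Var2, y=Var1, fill=value)) +
  geom_tile(color = "white")+
  scale_fill_gradient2(low = "blue", high = "red", mid = "white",
                       midpoint = 0, limit = c(-1,1))
```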
14\.4 Cancer Genetics
---------------------
Read in the data. The load() function reads in a dataset that has 20532 columns and may take some time. You may want to save and clear your environment (or open a new RStudio window) if you have other work open.
```
load('LAdata/geneCancerUCI.RData')
table(cancerlabels$Class)
```
```
##
## BRCA COAD KIRC LUAD PRAD
## 300 78 146 141 136
```
Original Source: *The cancer genome atlas pan\-cancer analysis project*
* BRCA \= Breast Invasive Carcinoma
* COAD \= Colon Adenocarcinoma
* KIRC \= Kidney Renal clear cell Carcinoma
* LUAD \= Lung Adenocarcinoma
* PRAD \= Prostate Adenocarcinoma
We are going to want to plot the data points according to their different classification labels, so we should pick out a nice color palette for categorical attributes. We chose the palette `Dark2`, but feel free to substitute any categorical palette that appeals to you in the code below!
```
library(RColorBrewer)
display.brewer.all()
palette(brewer.pal(n = 8, name = "Dark2"))
```
The first step is typically to explore the data. Obviously we can’t look at ALL the scatter plots of input variables. For the fun of it, let’s look at a few of these scatter plots which we’ll pick at random. First pick two column numbers at random, then draw the plot, coloring by the label. You could repeat this chunk several times to explore different combinations. Can you find one that does a good job of separating any of the types of cancer?
```
par(mfrow=c(2,3))
for(i in 1:6){
randomColumns = sample(2:20532,2)
plot(cancer[,randomColumns],col = cancerlabels$Class)
}
```
Figure 14\.14: Random 2\-Dimensional Projections of Cancer Data
To restore our plot window from that 3\-by\-2 grid, we run `dev.off()`
```
dev.off()
```
```
## null device
## 1
```
### 14\.4\.1 Computing the PCA
The `prcomp()` function is the one I most often recommend for reasonably sized principal component calculations in R. This function returns a list with class “prcomp” containing the following components (from `help(prcomp)`):
1. `sdev`: the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix).
2. `rotation`: the matrix of *variable loadings* (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
3. `x`: if retx is true *the value of the rotated data (i.e. the scores)* (the centred (and scaled if requested) data multiplied by the rotation matrix) is returned. Hence, cov(x) is the diagonal matrix \\(diag(sdev^2\)\\). For the formula method, napredict() is applied to handle the treatment of values omitted by the na.action.
4. `center`, `scale`: the centering and scaling used, or FALSE.
The option `scale = T` inside the function instructs the program to use **correlation PCA**. The **default is covariance PCA** (`scale = F`).
Now let’s compute the *first three* principal components and examine the data projected onto the first 2 axes. We can then look in 3 dimensions.
```
pcaOut = prcomp(cancer,rank = 3, scale = F)
```
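To connect the returned object to the component list above, we can peek at its pieces directly; a small sketch using the `pcaOut` object just created:

```
# The names line up with the components described above
names(pcaOut)        # "sdev" "rotation" "center" "scale" "x"

# First few standard deviations (square roots of the eigenvalues)
head(pcaOut$sdev)

# Dimensions of the loadings and scores we asked for (rank = 3)
dim(pcaOut$rotation)
dim(pcaOut$x)
```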
```
plot(pcaOut$x[,1], pcaOut$x[,2],
col = cancerlabels$Class,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'Genetic Samples Projected into 2-dimensions \n using COVARIANCE PCA')
```
Figure 14\.15: Covariance PCA of genetic data
### 14\.4\.2 3D plot with the `plotly` package
Make sure the plotly package is installed for the 3d plot. To get the plot points colored by group, we need to execute the following command that creates a vector of colors (specifying a color for each observation).
```
colors = factor(palette())
colors = colors[cancerlabels$Class]
table(colors, cancerlabels$Class)
```
```
##
## colors BRCA COAD KIRC LUAD PRAD
## #00000499 300 0 0 0 0
## #01010799 0 78 0 0 0
## #02020B99 0 0 146 0 0
## #03031199 0 0 0 141 0
## #05041799 0 0 0 0 136
## #07061C99 0 0 0 0 0
## #09072199 0 0 0 0 0
## #0C092699 0 0 0 0 0
## #0F0B2C99 0 0 0 0 0
## #120D3299 0 0 0 0 0
## #150E3799 0 0 0 0 0
## #180F3E99 0 0 0 0 0
## #1C104499 0 0 0 0 0
## #1F114A99 0 0 0 0 0
## #22115099 0 0 0 0 0
## #26125799 0 0 0 0 0
## #2A115D99 0 0 0 0 0
## #2F116399 0 0 0 0 0
## #33106899 0 0 0 0 0
## #38106C99 0 0 0 0 0
## #3C0F7199 0 0 0 0 0
## #400F7499 0 0 0 0 0
## #45107799 0 0 0 0 0
## #49107899 0 0 0 0 0
## #4E117B99 0 0 0 0 0
## #51127C99 0 0 0 0 0
## #56147D99 0 0 0 0 0
## #5A167E99 0 0 0 0 0
## #5D177F99 0 0 0 0 0
## #61198099 0 0 0 0 0
## #661A8099 0 0 0 0 0
## #6A1C8199 0 0 0 0 0
## #6D1D8199 0 0 0 0 0
## #721F8199 0 0 0 0 0
## #76218199 0 0 0 0 0
## #79228299 0 0 0 0 0
## #7D248299 0 0 0 0 0
## #82258199 0 0 0 0 0
## #86278199 0 0 0 0 0
## #8A298199 0 0 0 0 0
## #8E2A8199 0 0 0 0 0
## #922B8099 0 0 0 0 0
## #962C8099 0 0 0 0 0
## #9B2E7F99 0 0 0 0 0
## #9F2F7F99 0 0 0 0 0
## #A3307E99 0 0 0 0 0
## #A7317D99 0 0 0 0 0
## #AB337C99 0 0 0 0 0
## #AF357B99 0 0 0 0 0
## #B3367A99 0 0 0 0 0
## #B8377999 0 0 0 0 0
## #BC397899 0 0 0 0 0
## #C03A7699 0 0 0 0 0
## #C43C7599 0 0 0 0 0
## #C83E7399 0 0 0 0 0
## #CD407199 0 0 0 0 0
## #D0416F99 0 0 0 0 0
## #D5446D99 0 0 0 0 0
## #D8456C99 0 0 0 0 0
## #DC486999 0 0 0 0 0
## #DF4B6899 0 0 0 0 0
## #E34E6599 0 0 0 0 0
## #E6516399 0 0 0 0 0
## #E9556299 0 0 0 0 0
## #EC586099 0 0 0 0 0
## #EE5C5E99 0 0 0 0 0
## #F1605D99 0 0 0 0 0
## #F2655C99 0 0 0 0 0
## #F4695C99 0 0 0 0 0
## #F66D5C99 0 0 0 0 0
## #F7735C99 0 0 0 0 0
## #F9785D99 0 0 0 0 0
## #F97C5D99 0 0 0 0 0
## #FA815F99 0 0 0 0 0
## #FB866199 0 0 0 0 0
## #FC8A6299 0 0 0 0 0
## #FC906599 0 0 0 0 0
## #FCEFB199 0 0 0 0 0
## #FCF4B699 0 0 0 0 0
## #FCF8BA99 0 0 0 0 0
## #FCFDBF99 0 0 0 0 0
## #FD956799 0 0 0 0 0
## #FD9A6A99 0 0 0 0 0
## #FDDC9E99 0 0 0 0 0
## #FDE1A299 0 0 0 0 0
## #FDE5A799 0 0 0 0 0
## #FDEBAB99 0 0 0 0 0
## #FE9E6C99 0 0 0 0 0
## #FEA36F99 0 0 0 0 0
## #FEA87399 0 0 0 0 0
## #FEAC7699 0 0 0 0 0
## #FEB27A99 0 0 0 0 0
## #FEB67D99 0 0 0 0 0
## #FEBB8199 0 0 0 0 0
## #FEC08599 0 0 0 0 0
## #FEC48899 0 0 0 0 0
## #FEC98D99 0 0 0 0 0
## #FECD9099 0 0 0 0 0
## #FED39599 0 0 0 0 0
## #FED79999 0 0 0 0 0
```
```
library(plotly)
graph = plot_ly(x = pcaOut$x[,1],
y = pcaOut$x[,2],
z= pcaOut$x[,3],
type='scatter3d',
mode="markers",
marker = list(color=colors))
graph
```
### 14\.4\.3 3D plot with the `rgl` package
```
library(rgl)
```
```
##
## Attaching package: 'rgl'
```
```
## The following object is masked from 'package:plotrix':
##
## mtext3d
```
```
knitr::knit_hooks$set(webgl = hook_webgl)
```
Make sure the rgl package is installed for the 3d plot.
```
plot3d(x = pcaOut$x[,1],
y = pcaOut$x[,2],
z= pcaOut$x[,3],
col = colors,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
zlab = "Principal Component 3")
```
### 14\.4\.4 Variance explained
Proportion of Variance explained by 2,3 components:
```
summary(pcaOut)
```
```
## Importance of first k=3 (out of 801) components:
## PC1 PC2 PC3
## Standard deviation 75.7407 61.6805 58.57297
## Proportion of Variance 0.1584 0.1050 0.09472
## Cumulative Proportion 0.1584 0.2634 0.35815
```
```
# Alternatively, if you had computed ALL the principal components (omitted the rank=3 option) then
# you could directly compute the proportions of variance explained using what we know about the
# eigenvalues:
# sum(pcaOut$sdev[1:2]^2)/sum(pcaOut$sdev^2)
# sum(pcaOut$sdev[1:3]^2)/sum(pcaOut$sdev^2)
```
### 14\.4\.5 Using Correlation PCA
The data involved in this exercise are actually on the same scale, and normalizing them may not be in your best interest because of this. However, it’s always a good idea to explore both decompositions if you have time.
```
pca.cor = prcomp(cancer, rank=3, scale =T)
```
An error message! Cannot rescale a constant/zero column to unit variance. Solution: check for columns with zero variance and remove them. Then, re\-check dimensions of the matrix to see how many columns we lost.
```
cancer = cancer[,apply(cancer, 2, sd)>0 ]
dim(cancer)
```
```
## [1] 801 20264
```
Once we’ve taken care of those zero\-variance columns, we can proceed to compute the correlation PCA:
```
pca.cor = prcomp(cancer, rank=3, scale =T)
```
```
plot(pca.cor$x[,1], pca.cor$x[,2],
col = cancerlabels$Class,
xlab = "Principal Component 1",
ylab = "Principal Component 2",
main = 'Genetic Samples Projected into 2-dimensions \n using CORRELATION PCA')
```
Figure 14\.16: Correlation PCA of genetic data
And it’s clear just from the 2\-dimensional projection that correlation PCA does not seem to work as well as covariance PCA when it comes to separating the 5 different types of cancer.
Indeed, we can confirm this from the proportion of variance explained, which is substantially lower than that of covariance PCA:
```
summary(pca.cor)
```
```
## Importance of first k=3 (out of 801) components:
## PC1 PC2 PC3
## Standard deviation 46.2145 42.11838 39.7823
## Proportion of Variance 0.1054 0.08754 0.0781
## Cumulative Proportion 0.1054 0.19294 0.2710
```
### 14\.4\.6 Range standardization as an alternative to covariance PCA
We can also put all the variables on a scale of 0 to 1 if we’re concerned about issues with scale (in this case, scale wasn’t an issue \- but the following approach might still provide interesting projections in some datasets). This transformation would be as follows for each variable \\(\\mathbf{x}\\):
\\\[\\frac{\\mathbf{x} \- \\min(\\mathbf{x})}{\\max(\\mathbf{x})\-\\min(\\mathbf{x})}\\]
```
cancer = cancer[,apply(cancer,2,sd)>0]
min = apply(cancer,2,min)
range = apply(cancer,2, function(x){max(x)-min(x)})
minmax.cancer=scale(cancer,center=min,scale=range)
```
Then we can compute the covariance PCA of that range\-standardized data without concern:
```
minmax.pca = prcomp(minmax.cancer, rank=3, scale=F )
```
```
plot(minmax.pca$x[,1],minmax.pca$x[,2],col = cancerlabels$Class, xlab = "Principal Component 1", ylab = "Principal Component 2")
```
Figure 14\.17: Covariance PCA of range standardized genetic data
Chapter 16 Applications of SVD
==============================
16\.1 Text Mining
-----------------
Text mining is another area where the SVD is used heavily. In text mining, our data structure is generally known as a **Term\-Document Matrix**. The *documents* are any individual pieces of text that we wish to analyze, cluster, summarize or discover topics from. They could be sentences, abstracts, webpages, or social media updates. The *terms* are the words contained in these documents. The term\-document matrix represents what’s called the “bag\-of\-words” approach \- the order of the words is removed and the data becomes unstructured in the sense that each document is represented by the words it contains, not the order or context in which they appear. The \\((i,j)\\) entry in this matrix is the number of times term \\(i\\) appears in document \\(j\\).
**Definition 16\.1 (Term\-Document Matrix)** Let \\(m\\) be the number of terms appearing in a collection and \\(n\\) be the number of documents in that collection, then we create our **term\-document matrix** \\(\\A\\) as follows:
\\\[\\begin{equation}
\\begin{array}{ccc}
\& \& \\text{doc 1} \\quad \\text{doc $j$} \\,\\, \\text{doc $n$} \\\\
\\A\_{m\\times n} \= \& \\begin{array}{c}
\\hbox{term 1} \\\\
\\\\
\\\\
\\hbox{term $i$} \\\\
\\\\
\\hbox{term $m$} \\\\
\\end{array} \&
\\left(
\\begin{array}{ccccccc}
\& \& \& \|\& \& \& \\\\
\& \& \& \|\& \& \& \\\\
\& \& \& \|\& \& \& \\\\
\& \- \& \- \&f\_{ij} \& \& \& \\\\
\& \& \& \& \& \& \\\\
\& \& \& \& \& \& \\\\
\\end{array}
\\right)
\\end{array}
\\nonumber
\\end{equation}\\]
where \\(f\_{ij}\\) is the frequency of term \\(i\\) in document \\(j\\). A **binary** term\-document matrix will simply have \\(\\A\_{ij}\=1\\) if term \\(i\\) is contained in document \\(j\\).
### 16\.1\.1 Note About Rows vs. Columns
You might be asking yourself, “**Hey, wait a minute. Why do we have documents as columns in this matrix? Aren’t the documents like our observations?**” Sure! Many data scientists insist on having the documents on the rows of this matrix. *But*, before you do that, you should realize something. Many SVD and PCA routines are created in a way that is more efficient when your data is long vs. wide, and text data commonly has more terms than documents. The equivalence of the two presentations should be easy to see in all matrix factorization applications. If we have
\\\[\\A \= \\U\\mathbf{D}\\V^T\\] then,
\\\[\\A^T \= \\V\\mathbf{D}\\U^T\\]
so we merely need to switch our interpretations of the left\- and right\-singular vectors to switch from document columns to document rows.
Beyond any computational efficiency argument, we prefer to keep our documents on the columns here because of the emphasis placed earlier in this text regarding matrix multiplication viewed as a linear combination of columns. The animation in Figure [2\.7](mult.html#fig:multlincombanim) is a good thing to be clear on before proceeding here.
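The transpose relationship is easy to verify numerically on a small random matrix; here is a brief sketch (the matrix `A` below is made up purely for illustration):

```
set.seed(1)
A = matrix(rnorm(12), nrow = 4, ncol = 3)   # 4 "terms" by 3 "documents"

s1 = svd(A)      # A   = U D V^T
s2 = svd(t(A))   # A^T = V D U^T

# Same singular values either way
all.equal(s1$d, s2$d)

# Left singular vectors of A match right singular vectors of A^T (up to sign)
max(abs(abs(s1$u) - abs(s2$v)))
```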
### 16\.1\.2 Term Weighting
Term\-document matrices tend to be large and sparse. Term\-weighting schemes are often used to downplay the effect of commonly used words and bolster the effect of rare but semantically important words . The most popular weighting method is known as **Term Frequency\-Inverse Document Frequency (TF\-IDF)**. For this method, the raw term\-frequencies \\(f\_{ij}\\) in the matrix \\(\\A\\) are multiplied by global weights called *inverse document frequencies*, \\(w\_i\\), for each term. These weights reflect the commonality of each term across the entire collection and ultimately quantify a term’s ability to narrow one’s search results (the foundations of text analysis were, after all, dominated by search technology). The inverse document frequency of term \\(i\\) is:
\\\[w\_i \= \\log \\left( \\frac{\\mbox{total \# of documents}}{\\mbox{\# documents containing term } i} \\right)\\]
To put this weight in perspective, for a collection of \\(n\=10,000\\) documents we have \\(0\\leq w\_i \\leq 9\.2\\), where \\(w\_i\=0\\) means the word is contained in every document (rendering it useless for search) and \\(w\_i\=9\.2\\) means the word is contained in only 1 document (making it very useful for search). The document vectors are often normalized to have unit 2\-norm, since their directions (not their lengths) in the term\-space are what characterize them semantically.
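The weighting scheme is straightforward to apply to a term\-document matrix with terms on the rows. The following sketch follows the formula above on a small made\-up matrix `A` (not the example used later in this chapter):

```
# Toy term-document matrix: 4 terms (rows) x 3 documents (columns)
A = matrix(c(2,0,1,
             0,3,0,
             1,1,1,
             0,0,2), nrow = 4, byrow = TRUE)

n.docs = ncol(A)
docs.with.term = rowSums(A > 0)

# Inverse document frequency for each term
idf = log(n.docs / docs.with.term)

# Multiply each row (term) by its IDF weight, then normalize columns to unit 2-norm
A.tfidf = A * idf
A.tfidf = sweep(A.tfidf, 2, sqrt(colSums(A.tfidf^2)), "/")
```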
### 16\.1\.3 Other Considerations
In dealing with text, we want to do as much as we can to minimize the size of the dictionary (the collection of terms which enumerate the rows of our term\-document matrix) for both computational and practical reasons. The first effort we’ll make toward this goal is to remove so\-called **stop words**, or very common words that appear in a great many sentences like articles (“a,” “an,” “the”) and prepositions (“about,” “for,” “at”) among others. Many projects also contain domain\-specific stop words. For example, one might remove the word “Reuters” from a corpus of [Reuters’ newswires](https://shainarace.github.io/Reuters/). The second effort we’ll often make is to apply a **stemming** algorithm which reduces words to their *stem.* For example, the words “swimmer” and “swimming” would both be reduced to their stem, “swim.” Stemming and stop word removal can greatly reduce the size of the dictionary and also help draw meaningful connections between documents.
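As a concrete illustration, the sketch below assumes the `tm` and `SnowballC` packages are installed; their `stopwords()` and `wordStem()` functions are used for the stop word list and stemming, and are not otherwise part of this text.

```
library(tm)         # stopwords()
library(SnowballC)  # wordStem()

words = c("the", "swimmer", "is", "swimming", "at", "practice")

# Remove common English stop words
words = words[!words %in% stopwords("en")]

# Reduce the remaining words to their stems
# (the exact stems returned depend on the stemming algorithm used)
wordStem(words)
```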
### 16\.1\.4 Latent Semantic Indexing
The noise\-reduction property of the SVD was extended to text processing in 1990 by Susan Dumais et al, who named the effect *Latent Semantic Indexing (LSI)*. LSI involves the singular value decomposition of the term\-document matrix defined in Definition [16\.1](svdapp.html#def:tdm). In other words, it is like a principal components analysis using the unscaled, uncentered inner\-product matrix \\(\\A^T\\A\\). If the documents are normalized to have unit length, this is a matrix of **cosine similarities** (see Chapter [6](norms.html#norms)). Cosine similarity is the most common measure of similarity between documents for text mining. If the term\-document matrix is binary, this is often called the co\-occurrence matrix because each entry gives the number of times two words occur in the same document.
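Since cosine similarity plays such a central role here, a tiny sketch of how it can be computed from a term\-document matrix with unit\-length document columns may help (the toy matrix `A` is again made up for illustration):

```
# Toy term-document matrix: terms on rows, documents on columns
A = matrix(c(1,2,0,
             2,3,0,
             0,1,1,
             0,0,2), nrow = 4, byrow = TRUE)

# Normalize each document (column) to unit 2-norm
A.norm = sweep(A, 2, sqrt(colSums(A^2)), "/")

# Cosine similarity between documents is then just an inner-product matrix
cos.sim = t(A.norm) %*% A.norm
round(cos.sim, 2)
```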
It certainly seems logical to view text data in this context as it contains both an informative signal and semantic noise. LSI quickly grew roots in the information retrieval community, where it is often used for query processing. The idea is to remove semantic noise, due to variation and ambiguity in vocabulary and presentation style, without losing significant amounts of information. For example, a human may not differentiate between the words “car” and “automobile,” but indeed the words will become two separate entities in the raw term\-document matrix. The main idea in LSI is that the realignment of the data into fewer directions should force related documents (like those containing “car” and “automobile”) closer together in an angular sense, thus revealing latent semantic connections.
Proponents of LSI suggest that the use of the Singular Value Decomposition to project the documents into a lower\-dimensional space results in a representation which reflects the major associative patterns of the data while ignoring less important influences. This projection is done with the simple truncation of the SVD shown in Equation [(15\.3\)](svd.html#eq:truncsvd).
As we have seen with other types of data, the very nature of dimension reduction makes it possible for two documents with similar semantic properties to be mapped closer together. Unfortunately, the mixture of signs (positive and negative) in the singular vectors (think principal components) makes the decomposition difficult to interpret. While the major claims of LSI are legitimate, this lack of interpretability is still conceptually problematic for some folks. In order to make this point as clear as possible, consider the original “term basis” representation for the data, where each document (from a collection containing \\(m\\) total terms in the dictionary) could be written as:
\\\[\\A\_j \= \\sum\_{i\=1}^{m} f\_{ij}\\e\_i\\]
where \\(f\_{ij}\\) is the frequency of term \\(i\\) in the document, and \\(\\e\_i\\) is the \\(i^{th}\\) column of the \\(m\\times m\\) identity matrix. The truncated SVD gives us a new set of coordinates (scores) and basis vectors (principal component features):
\\\[\\A\_j \\approx \\sum\_{i\=1}^r \\alpha\_i \\u\_i\\]
but the features \\(\\u\_i\\) live in the term space, and thus ought to be interpretable as linear combinations of the original “term basis.” However, the linear combinations, having both positive and negative coefficients, tend to be semantically obscure in practice \- these new features do not often form meaningful *topics* for the text, although they do often organize in a meaningful way, as we will demonstrate in the next section.
### 16\.1\.5 Example
Let’s consider a corpus of short documents, perhaps status updates from social media sites. We’ll keep this corpus as minimal as possible to demonstrate the utility of the SVD for text.
Figure 16\.1: A corpus of 6 documents. Words occurring in more than one document appear in bold. Stop words removed, stemming utilized. Document numbers correspond to term\-document matrix below.
\\\[\\begin{equation\*}
\\begin{array}{cc}
\& \\begin{array}{cccccc} \\;doc\_1\\; \& \\;doc\_2\\;\& \\;doc\_3\\;\& \\;doc\_4\\;\& \\;doc\_5\\;\& \\;doc\_6\\; \\end{array}\\\\
\\begin{array}{c}
\\hbox{cat} \\\\
\\hbox{dog}\\\\
\\hbox{eat}\\\\
\\hbox{tired} \\\\
\\hbox{toy}\\\\
\\hbox{injured} \\\\
\\hbox{ankle} \\\\
\\hbox{broken} \\\\
\\hbox{swollen} \\\\
\\hbox{sprained} \\\\
\\end{array} \&
\\left(
\\begin{array}{cccccc}
\\quad 1\\quad \& \\quad 2\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 3\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\end{array}\\right)
\\end{array}
\\end{equation\*}\\]
We’ll start by entering this matrix into R by hand. Of course, the process of parsing a collection of documents and creating a term\-document matrix is generally automated. The `tm` text mining library is recommended for creating a term\-document matrix in practice.
```
A=matrix(c(1,2,2,0,0,0,
2,3,2,0,0,0,
2,0,1,0,0,0,
0,1,0,0,1,0,
0,1,1,0,0,0,
0,0,0,1,1,0,
0,0,0,1,1,1,
0,0,0,1,0,1,
0,0,0,1,0,1,
0,0,0,1,1,0),
nrow=10, byrow=T)
A
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 2 0 0 0
## [2,] 2 3 2 0 0 0
## [3,] 2 0 1 0 0 0
## [4,] 0 1 0 0 1 0
## [5,] 0 1 1 0 0 0
## [6,] 0 0 0 1 1 0
## [7,] 0 0 0 1 1 1
## [8,] 0 0 0 1 0 1
## [9,] 0 0 0 1 0 1
## [10,] 0 0 0 1 1 0
```
Because our corpus is so small, we’ll skip the step of term\-weighting, but we *will* normalize the documents to have equal length. In other words, we’ll divide each document vector by its two\-norm so that it becomes a unit vector:
```
A_norm = apply(A, 2, function(x){x/c(sqrt(t(x)%*%x))})
A_norm
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.3333333 0.5163978 0.6324555 0.0000000 0.0 0.0000000
## [2,] 0.6666667 0.7745967 0.6324555 0.0000000 0.0 0.0000000
## [3,] 0.6666667 0.0000000 0.3162278 0.0000000 0.0 0.0000000
## [4,] 0.0000000 0.2581989 0.0000000 0.0000000 0.5 0.0000000
## [5,] 0.0000000 0.2581989 0.3162278 0.0000000 0.0 0.0000000
## [6,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
## [7,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.5773503
## [8,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [9,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [10,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
```
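As a quick aside (an added check, not part of the original example), with unit\-length document columns the inner\-product matrix \\(\\A^T\\A\\) discussed in the LSI section is exactly the matrix of pairwise cosine similarities between documents:
```
# Pairwise cosine similarities between the 6 documents;
# with unit-length columns, t(A_norm) %*% A_norm is the cosine similarity matrix
round(crossprod(A_norm), 2)
```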
We then compute the SVD of `A_norm` and observe the left\- and right\-singular vectors. Since the matrix \\(\\A\\) is term\-by\-document, you might consider the terms as being the “units” of the rows of \\(\\A\\) and the documents as being the “units” of the columns. For example, \\(\\A\_{23}\=2\\) could logically be interpreted as “there are 2 units of the word *dog* per *document number 3*.” In this mentality, any factorization of the matrix should preserve those units. Similar to any [“Change of Units Railroad”](https://www.katmarsoftware.com/articles/railroad-track-unit-conversion.htm), matrix factorization can be considered in terms of units assigned to both rows and columns:
\\\[\\A\_{\\text{term} \\times \\text{doc}} \= \\U\_{\\text{term} \\times \\text{factor}}\\mathbf{D}\_{\\text{factor} \\times \\text{factor}}\\V^T\_{\\text{factor} \\times \\text{doc}}\\]
Thus, when we examine the rows of the matrix \\(\\U\\), we’re looking at information about each term and how it contributes to each factor (i.e. the “factors” are just linear combinations of our elementary term vectors). When we examine the columns of the matrix \\(\\V^T\\), we’re looking at information about how each document is related to each factor (i.e. the documents are linear combinations of these factors with weights corresponding to the elements of \\(\\V^T\\)). And what about \\(\\mathbf{D}?\\) Well, in classical factor analysis the matrix \\(\\mathbf{D}\\) is often combined with either \\(\\U\\) or \\(\\V^T\\) to obtain a two\-matrix factorization. \\(\\mathbf{D}\\) describes how much information or signal from our original matrix exists along each of the singular components. It is common to use a **screeplot**, a simple line plot of the singular values in \\(\\mathbf{D}\\), to determine an appropriate *rank* for the truncation in Equation [(15\.3\)](svd.html#eq:truncsvd).
```
out = svd(A_norm)
plot(out$d, ylab = 'Singular Values of A_norm')
```
Figure 16\.2: Screeplot for the Toy Text Dataset
Noticing the gap, or “elbow,” in the screeplot at an index of 2 lets us know that the first two singular components contain notably more information than the components that follow \- a major proportion of the pattern or signal in this matrix lies along 2 components, i.e. **there are 2 major topics that might provide a reasonable approximation to the data**. What’s a “topic” in a vector space model? A linear combination of terms! It’s just a column vector in the term space! Let’s first examine the left\-singular vectors in \\(\\U\\). Remember, the *rows* of this matrix describe how the terms load onto factors, and the columns are those mysterious “factors” themselves.
```
out$u
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.52980742 -0.04803212 0.01606507 -0.24737747 0.23870207 0.45722153
## [2,] -0.73429739 -0.06558224 0.02165167 -0.08821632 -0.09484667 -0.56183983
## [3,] -0.34442976 -0.03939120 0.10670326 0.83459702 -0.14778574 0.25277609
## [4,] -0.11234648 0.16724740 -0.47798864 -0.22995963 -0.59187851 -0.07506297
## [5,] -0.20810051 -0.01743101 -0.01281893 -0.34717811 0.23948814 0.42758997
## [6,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
## [7,] -0.04573569 0.58708873 0.01651849 -0.01514815 -0.42604773 0.38615891
## [8,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [9,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [10,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
```
So the first “factor” of the SVD is as follows:
\\\[\\text{factor}\_1 \=
\-0\.530\\text{cat} \-0\.734\\text{dog} \-0\.344\\text{eat} \-0\.112\\text{tired} \-0\.208\\text{toy} \-0\.034\\text{injured} \-0\.046\\text{ankle} \-0\.024\\text{broken} \-0\.024\\text{swollen} \-0\.034\\text{sprained} \\]
We can immediately see why people had trouble with LSI as a topic model – it’s hard to intuit how you might treat a mix of positive and negative coefficients in the output. If we ignore the signs and only investigate the absolute values, we can certainly see some meaningful topic information in this first factor: the largest magnitude weights all go to the words from the documents about pets. You might like to say that negative entries mean a topic is *anticorrelated* with that word, and to some extent this is correct. That logic works nicely, in fact, for factor 2:
\\\[\\text{factor}\_2 \= \-0\.048\\text{cat} \-0\.066\\text{dog} \-0\.039\\text{eat} \+0\.167\\text{tired} \-0\.017\\text{toy} \+0\.370\\text{injured} \+0\.587\\text{ankle} \+0\.415\\text{broken} \+0\.415\\text{swollen} \+0\.370\\text{sprained}\\]
However, circling back to factor 1 then leaves us wanting to see different signs for the two groups of words. Nevertheless, the information separating the words is most certainly present. Take a look at the plot of the words’ loadings along the first two factors in Figure [16\.3](svdapp.html#fig:lsiwords).
Figure 16\.3: Projection of the Terms onto First two Singular Dimensions
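A plot along these lines can be generated directly from the loadings in \\(\\U\\); the code below is only a sketch of how a figure like Figure 16\.3 might be produced, not the book’s original code:
```
# Sketch of a term-loading plot along the first two factors (cf. Figure 16.3)
terms = c('cat','dog','eat','tired','toy','injured','ankle','broken','swollen','sprained')
plot(out$u[,1], out$u[,2], xlab='Factor 1 loading', ylab='Factor 2 loading')
text(out$u[,1], out$u[,2], labels=terms, pos=3)
```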
Moving on to the documents, we can see a similar clustering pattern in the columns of \\(\\V^T\\) which are the rows of \\(\\V\\), shown below:
```
out$v
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.55253068 -0.05828903 0.10665606 0.74609663 -0.2433982 -0.2530492
## [2,] -0.57064141 -0.02502636 -0.11924683 -0.62022594 -0.1219825 -0.5098650
## [3,] -0.60092838 -0.06088635 0.06280655 -0.10444424 0.3553232 0.7029012
## [4,] -0.04464392 0.65412158 0.05781835 0.12506090 0.6749109 -0.3092635
## [5,] -0.06959068 0.50639918 -0.75339800 0.06438433 -0.3367244 0.2314730
## [6,] -0.03357626 0.55493581 0.63206685 -0.16722869 -0.4803488 0.1808591
```
In fact, the ability to separate the documents with the first two singular vectors is rather magical here, as shown visually in Figure [16\.4](svdapp.html#fig:lsidocs).
Figure 16\.4: Projection of the Documents onto First two Singular Dimensions
Figure [16\.4](svdapp.html#fig:lsidocs) demonstrates how documents that live in a 10\-dimensional term space can be compressed down to 2 dimensions in a way that captures the major information of interest. If we were to take that 2\-truncated SVD of our normalized term\-document matrix and multiply it back together, we’d see an *approximation* of `A_norm`, and we could calculate the error involved in that approximation. We could equivalently calculate that error by using the singular values.
```
A_approx = out$u[,1:2]%*% diag(out$d[1:2])%*%t(out$v[,1:2])
# Sum of element-wise squared error between A_norm and its rank-2 approximation
(norm(A_norm-A_approx,'F'))^2
```
```
## [1] 1.195292
```
```
# Sum of squared singular values truncated
(sum(out$d[3:6]^2))
```
```
## [1] 1.195292
```
However, multiplying back to the original data is not generally an action of interest to data scientists. What we are after in the SVD is the dimensionality\-reduced data contained in the columns of \\(\\V^T\\) (or, if you’ve created a document\-term matrix, the rows of \\(\\U\\)).
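As a small added sketch (not part of the original example), those reduced document coordinates can be pulled straight from the SVD output; scaling the first two right\-singular vectors by their singular values is one common convention for a plot like Figure 16\.4:
```
# 2-dimensional coordinates for the 6 documents (one common scaling convention)
doc_coords = out$v[,1:2] %*% diag(out$d[1:2])
plot(doc_coords, xlab='SVD dimension 1', ylab='SVD dimension 2')
text(doc_coords, labels=paste0('doc', 1:6), pos=3)
```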
16\.2 Image Compression
-----------------------
While multiplying back to the original data is not generally something we’d like to do, it does provide a nice illustration of noise\-reduction and signal\-compression when working with images. The following example is not designed to teach you how to work with images for the purposes of data science. It is merely a nice visual way to *see* what’s happening when we truncate the SVD and omit these directions that have “minimal signal.”
### 16\.2\.1 Image data in R
Let’s take an image of a leader that we all know and respect:
Figure 16\.5: Michael Rappa, PhD, Founding Director of the Institute for Advanced Analytics and Distinguished Professor at NC State
This image can be downloaded from the IAA website, after clicking on the link on the left hand side “Michael Rappa / Founding Director.”
Let’s read this image into R. You’ll need to install the pixmap package:
```
#install.packages("pixmap")
library(pixmap)
```
Download the image to your computer and then set your working directory in R to the folder where you have saved the image:
```
setwd("/Users/shaina/Desktop/lin-alg")
```
The first thing we will do is examine the image as an \[R,G,B] (extension .ppm) and as a grayscale (extension .pgm). Let’s start with the \[R,G,B] image and see what the data looks like in R:
```
rappa = read.pnm("LAdata/rappa.ppm")
```
```
## Warning in rep(cellres, length = 2): 'x' is NULL so the result will be NULL
```
```
#Show the type of the information contained in our data:
str(rappa)
```
```
## Formal class 'pixmapRGB' [package "pixmap"] with 8 slots
## ..@ red : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ green : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ blue : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ channels: chr [1:3] "red" "green" "blue"
## ..@ size : int [1:2] 160 250
## ..@ cellres : num [1:2] 1 1
## ..@ bbox : num [1:4] 0 0 250 160
## ..@ bbcent : logi FALSE
```
You can see we have 3 matrices \- one for each of the colors: red, green, and blue.
Rather than a traditional data frame, the image is stored as a formal (S4) class object, so we have to refer to the elements in this data set with @ rather than with $.
```
rappa@size
```
```
## [1] 160 250
```
We can then display a heat map showing the intensity of each individual color in each pixel:
```
rappa.red=rappa@red
rappa.green=rappa@green
rappa.blue=rappa@blue
image(rappa.green)
```
Figure 16\.6: Intensity of green in each pixel of the original image
Oops! Dr. Rappa is sideways. To rotate the graphic, we actually have to rotate our coordinate system. There is an easy way to do this (with a little bit of matrix experience): we simply transpose the matrix and then reorder the columns so the last one is first (note that `nrow(rappa.green)` gives the number of columns in the transposed matrix):
```
rappa.green=t(rappa.green)[,nrow(rappa.green):1]
image(rappa.green)
```
Rather than compressing the colors individually, let’s work with the grayscale image:
```
greyrappa = read.pnm("LAdata/rappa.pgm")
```
```
## Warning in rep(cellres, length = 2): 'x' is NULL so the result will be NULL
```
```
str(greyrappa)
```
```
## Formal class 'pixmapGrey' [package "pixmap"] with 6 slots
## ..@ grey : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ channels: chr "grey"
## ..@ size : int [1:2] 160 250
## ..@ cellres : num [1:2] 1 1
## ..@ bbox : num [1:4] 0 0 250 160
## ..@ bbcent : logi FALSE
```
```
rappa.grey=greyrappa@grey
#again, rotate 90 degrees
rappa.grey=t(rappa.grey)[,nrow(rappa.grey):1]
```
```
image(rappa.grey, col=grey((0:1000)/1000))
```
Figure 16\.7: Greyscale representation of original image
### 16\.2\.2 Computing the SVD of Dr. Rappa
Now, let’s use what we know about the SVD to compress this image. First, let’s compute the SVD and save the individual components. Remember that the rows of \\(\\mathbf{V}^T\\) are the right\-singular vectors. R outputs the matrix \\(\\mathbf{V}\\), which has the right\-singular vectors in its columns.
```
rappasvd=svd(rappa.grey)
U=rappasvd$u
d=rappasvd$d
Vt=t(rappasvd$v)
```
Now let’s compute some approximations of rank 3, 10, and 25:
```
rappaR3=U[ ,1:3]%*%diag(d[1:3])%*%Vt[1:3, ]
image(rappaR3, col=grey((0:1000)/1000))
```
Figure 16\.8: Rank 3 approximation of the image data
```
rappaR10=U[ ,1:10]%*%diag(d[1:10])%*%Vt[1:10, ]
image(rappaR10, col=grey((0:1000)/1000))
```
Figure 16\.9: Rank 10 approximation of the image data
```
rappaR25=U[ ,1:25]%*%diag(d[1:25])%*%Vt[1:25, ]
image(rappaR25, col=grey((0:1000)/1000))
```
Figure 16\.10: Rank 25 approximation of the image data
How many singular vectors does it take to recognize Dr. Rappa? Certainly 25 is sufficient. Can you recognize him with even fewer? You can play around with this and see how the image changes.
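To make that experimentation easier, here is a small helper function (a sketch added here, not from the original text) that displays the rank\-\\(k\\) approximation for any \\(k\\):
```
# Display the rank-k approximation of the grayscale image
show_rank_k = function(k) {
  approx = U[, 1:k, drop=FALSE] %*% diag(d[1:k], nrow=k) %*% Vt[1:k, , drop=FALSE]
  image(approx, col=grey((0:1000)/1000), main=paste('Rank', k, 'approximation'))
}
show_rank_k(5)   # try different values of k
```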
### 16\.2\.3 The Noise
One of the main benefits of the SVD is that the *signal\-to\-noise* ratio of each component decreases as we move towards the right end of the SVD sum. If \\(\\mathbf{X}\\) is our data matrix (in this example, it is a matrix of pixel data to create an image) then,
\\\[\\begin{equation}
\\mathbf{X}\= \\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T \+ \\sigma\_2\\mathbf{u}\_2\\mathbf{v}\_2^T \+ \\sigma\_3\\mathbf{u}\_3\\mathbf{v}\_3^T \+ \\dots \+ \\sigma\_r\\mathbf{u}\_r\\mathbf{v}\_r^T
\\tag{15\.2}
\\end{equation}\\]
where \\(r\\) is the rank of the matrix. Our image matrix is full rank, \\(r\=160\\). This is the number of nonzero singular values, \\(\\sigma\_i\\). But, upon examination, we see that many of the singular values are nearly 0\. Let’s examine the last 20 singular values:
```
d[140:160]
```
```
## [1] 0.035731961 0.033644986 0.033030189 0.028704912 0.027428124 0.025370919
## [7] 0.024289497 0.022991926 0.020876657 0.020060538 0.018651373 0.018011032
## [13] 0.016299834 0.015668836 0.013928107 0.013046327 0.011403096 0.010763141
## [19] 0.009210187 0.008421977 0.004167310
```
We can think of these values as the amount of “information” directed along those last 20 singular components. If we assume the noise in the image or data is uniformly distributed along each orthogonal component \\(\\mathbf{u}\_i\\mathbf{v}\_i^T\\), then there is just as much noise in the component \\(\\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T\\) as there is in the component \\(\\sigma\_{160}\\mathbf{u}\_{160}\\mathbf{v}\_{160}^T\\). But, as we’ve just shown, there is far less information in the component \\(\\sigma\_{160}\\mathbf{u}\_{160}\\mathbf{v}\_{160}^T\\) than there is in the component \\(\\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T\\). This means that the later components are primarily noise. Let’s see if we can illustrate this using our image. We’ll construct the parts of the image that are represented on the last few singular components:
```
# Using the last 25 components:
rappa_bad25=U[ ,135:160]%*%diag(d[135:160])%*%Vt[135:160, ]
image(rappa_bad25, col=grey((0:1000)/1000))
```
Figure 16\.11: The last 25 components, or the sum of the last 25 terms in equation [(15\.2\)](svd.html#eq:svdsum)
```
# Using the last 50 components:
rappa_bad50=U[ ,110:160]%*%diag(d[110:160])%*%Vt[110:160, ]
image(rappa_bad50, col=grey((0:1000)/1000))
```
Figure 16\.12: The last 50 components, or the sum of the last 50 terms in equation [(15\.2\)](svd.html#eq:svdsum)
```
# Using the last 100 components: (4 times as many components as it took us to recognize the face on the front end)
rappa_bad100=U[ ,61:160]%*%diag(d[61:160])%*%Vt[61:160, ]
image(rappa_bad100, col=grey((0:1000)/1000))
```
Figure 16\.13: The last 100 components, or the sum of the last 100 terms in equation [(15\.2\)](svd.html#eq:svdsum)
Mostly noise. In the last of these images, we see the outline of Dr. Rappa. The crisp outlines of objects are among the first things to go when images are compressed. This is something you may have witnessed in your own experience, particularly when converting a picture to a format that compresses the file size.
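One quick way to quantify this (a small added sketch, not from the original text) is to look at the cumulative proportion of the squared Frobenius norm captured by the leading singular components:
```
# Cumulative proportion of the total squared Frobenius norm ("energy")
# captured by the first k singular components
energy = cumsum(d^2) / sum(d^2)
round(energy[c(3, 10, 25, 50)], 4)
```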
\\\[w\_i \= \\log \\left( \\frac{\\mbox{total \# of documents}}{\\mbox{\# documents containing term } i} \\right)\\]
To put this weight in perspective, for a collection of \\(n\=10,000\\) documents we have \\(0\\leq w\_j \\leq 9\.2\\), where \\(w\_j\=0\\) means the word is contained in every document (rendering it useless for search) and \\(w\_j\=9\.2\\) means the word is contained in only 1 document (making it very useful for search). The document vectors are often normalized to have unit 2\-norm, since their directions (not their lengths) in the term\-space is what characterizes them semantically.
### 16\.1\.3 Other Considerations
In dealing with text, we want to do as much as we can do minimize the size of the dictionary (the collection of terms which enumerate the rows of our term\-document matrix) for both computational and practical reasons. The first effort we’ll make toward this goal is to remove so\-called **stop words**, or very common words that appear in a great many sentences like articles (“a,” “an,” “the”) and prepositions (“about,” “for,” “at”) among others. Many projects also contain domain\-specific stop words. For example, one might remove the word “Reuters” from a corpus of [Reuters’ newswires](https://shainarace.github.io/Reuters/). The second effort we’ll often make is to apply a **stemming** algorithm which reduces words to their *stem.* For example, the words “swimmer” and “swimming” would both be reduced to their stem, “swim.” Stemming and stop word removal can greatly reduce the size of the dictionary and also help draw meaningful connections between documents.
### 16\.1\.4 Latent Semantic Indexing
The noise\-reduction property of the SVD was extended to text processing in 1990 by Susan Dumais et al, who named the effect *Latent Semantic Indexing (LSI)*. LSI involves the singular value decomposition of the term\-document matrix defined in Definition [16\.1](svdapp.html#def:tdm). In other words, it is like a principal components analysis using the unscaled, uncentered inner\-product matrix \\(\\A^T\\A\\). If the documents are normalized to have unit length, this is a matrix of **cosine similarities** (see Chapter [6](norms.html#norms)). Cosine similarity is the most common measure of similarity between documents for text mining. If the term\-document matrix is binary, this is often called the co\-occurrence matrix because each entry gives the number of times two words occur in the same document.
It certainly seems logical to view text data in this context as it contains both an informative signal and semantic noise. LSI quickly grew roots in the information retrieval community, where it is often used for query processing. The idea is to remove semantic noise, due to variation and ambiguity in vocabulary and presentation style, without losing significant amounts of information. For example, a human may not differentiate between the words “car” and “automobile,” but indeed the words will become two separate entities in the raw term\-document matrix. The main idea in LSI is that the realignment of the data into fewer directions should force related documents (like those containing “car” and “automobile”) closer together in an angular sense, thus revealing latent semantic connections.
Purveyors of LSI suggest that the use of the Singular Value Decomposition to project the documents into a lower\-dimensional space results in a representation which reflects the major associative patterns of the data while ignoring less important influences. This projection is done with the simple truncation of the SVD shown in Equation [(15\.3\)](svd.html#eq:truncsvd).
As we have seen with other types of data, the very nature of dimension reduction makes possible for two documents with similar semantic properties to be mapped closer together. Unfortunately, the mixture of signs (positive and negative) in the singular vectors (think principal components) makes the decomposition difficult to interpret. While the major claims of LSI are legitimate, this lack of interpretability is still conceptually problematic for some folks. In order to make this point as clear as possible, consider the original “term basis” representation for the data, where each document (from a collection containing \\(m\\) total terms in the dictionary) could be written as:
\\\[\\A\_j \= \\sum\_{i\=1}^{m} f\_{ij}\\e\_i\\]
where \\(f\_{ij}\\) is the frequency of term \\(i\\) in the document, and \\(\\e\_i\\) is the \\(i^{th}\\) column of the \\(m\\times m\\) identity matrix. The truncated SVD gives us a new set of coordinates (scores) and basis vectors (principal component features):
\\\[\\A\_j \\approx \\sum\_{i\=1}^r \\alpha\_i \\u\_i\\]
but the features \\(\\u\_i\\) live in the term space, and thus ought to be interpretable as a linear combinations of the original “term basis.” However the linear combinations, having both positive and negative coefficients, tends to be semantically obscure in practice \- These new features do not often form meaningful *topics* for the text, although they often do organize in a meaningful way as we will demonstrate in the next section.
### 16\.1\.5 Example
Let’s consider a corpus of short documents, perhaps status updates from social media sites. We’ll keep this corpus as minimal as possible to demonstrate the utility of the SVD for text.
Figure 16\.1: A corpus of 6 documents. Words occurring in more than one document appear in bold. Stop words removed, stemming utilized. Document numbers correspond to term\-document matrix below.
\\\[\\begin{equation\*}
\\begin{array}{cc}
\& \\begin{array}{cccccc} \\;doc\_1\\; \& \\;doc\_2\\;\& \\;doc\_3\\;\& \\;doc\_4\\;\& \\;doc\_5\\;\& \\;doc\_6\\; \\end{array}\\\\
\\begin{array}{c}
\\hbox{cat} \\\\
\\hbox{dog}\\\\
\\hbox{eat}\\\\
\\hbox{tired} \\\\
\\hbox{toy}\\\\
\\hbox{injured} \\\\
\\hbox{ankle} \\\\
\\hbox{broken} \\\\
\\hbox{swollen} \\\\
\\hbox{sprained} \\\\
\\end{array} \&
\\left(
\\begin{array}{cccccc}
\\quad 1\\quad \& \\quad 2\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 3\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\end{array}\\right)
\\end{array}
\\end{equation\*}\\]
We’ll start by entering this matrix into R. Of course the process of parsing a collection of documents and creating a term\-document matrix is generally more automatic. The `tm` text mining library is recommended for creating a term\-document matrix in practice.
```
A=matrix(c(1,2,2,0,0,0,
2,3,2,0,0,0,
2,0,1,0,0,0,
0,1,0,0,1,0,
0,1,1,0,0,0,
0,0,0,1,1,0,
0,0,0,1,1,1,
0,0,0,1,0,1,
0,0,0,1,0,1,
0,0,0,1,1,0),
nrow=10, byrow=T)
A
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 2 0 0 0
## [2,] 2 3 2 0 0 0
## [3,] 2 0 1 0 0 0
## [4,] 0 1 0 0 1 0
## [5,] 0 1 1 0 0 0
## [6,] 0 0 0 1 1 0
## [7,] 0 0 0 1 1 1
## [8,] 0 0 0 1 0 1
## [9,] 0 0 0 1 0 1
## [10,] 0 0 0 1 1 0
```
Because our corpus is so small, we’ll skip the step of term\-weighting, but we *will* normalize the documents to have equal length. In other words, we’ll divide each document vector by its two\-norm so that it becomes a unit vector:
```
A_norm = apply(A, 2, function(x){x/c(sqrt(t(x)%*%x))})
A_norm
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.3333333 0.5163978 0.6324555 0.0000000 0.0 0.0000000
## [2,] 0.6666667 0.7745967 0.6324555 0.0000000 0.0 0.0000000
## [3,] 0.6666667 0.0000000 0.3162278 0.0000000 0.0 0.0000000
## [4,] 0.0000000 0.2581989 0.0000000 0.0000000 0.5 0.0000000
## [5,] 0.0000000 0.2581989 0.3162278 0.0000000 0.0 0.0000000
## [6,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
## [7,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.5773503
## [8,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [9,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [10,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
```
We then compute the SVD of `A_norm` and observe the left\- and right\-singular vectors. Since the matrix \\(\\A\\) is term\-by\-document, you might consider the terms as being the “units” of the rows of \\(\\A\\) and the documents as being the “units” of the columns. For example, \\(\\A\_{23}\=2\\) could logically be interpreted as “there are 2 units of the word *dog* per *document number 3*.” In this mentality, any factorization of the matrix should preserve those units. Similar to any [“Change of Units Railroad”](https://www.katmarsoftware.com/articles/railroad-track-unit-conversion.htm), matrix factorization can be considered in terms of units assigned to both rows and columns:
\\\[\\A\_{\\text{term} \\times \\text{doc}} \= \\U\_{\\text{term} \\times \\text{factor}}\\mathbf{D}\_{\\text{factor} \\times \\text{factor}}\\V^T\_{\\text{factor} \\times \\text{doc}}\\]
Thus, when we examine the rows of the matrix \\(\\U\\), we’re looking at information about each term and how it contributes to each factor (i.e. the “factors” are just linear combinations of our elementary term vectors); When we examine the columns of the matrix \\(\\V^T\\), we’re looking at information about how each document is related to each factor (i.e. the documents are linear combinations of these factors with weights corresponding to the elements of \\(\\V^T\\)). And what about \\(\\mathbf{D}?\\) Well, in classical factor analysis the matrix \\(\\mathbf{D}\\) is often combined with either \\(\\U\\) or \\(\\V^T\\) to obtain a two\-matrix factorization. \\(\\mathbf{D}\\) describes how much information or signal from our original matrix exists along each of the singular components. It is common to use a **screeplot**, a simple line plot of the singular values in \\(\\mathbf{D}\\), to determine an appropriate *rank* for the truncation in Equation [(15\.3\)](svd.html#eq:truncsvd).
```
out = svd(A_norm)
plot(out$d, ylab = 'Singular Values of A_norm')
```
Figure 16\.2: Screeplot for the Toy Text Dataset
Noticing the gap, or “elbow” in the screeplot at an index of 2 lets us know that the first two singular components contain notably more information than the components to follow \- A major proportion of pattern or signal in this matrix lies long 2 components, i.e. **there are 2 major topics that might provide a reasonable approximation to the data**. What’s a “topic” in a vector space model? A linear combination of terms! It’s just a column vector in the term space! Let’s first examine the left\-singular vectors in \\(\\U\\). Remember, the *rows* of this matrix describe how the terms load onto factors, and the columns are those mysterious “factors” themselves.
```
out$u
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.52980742 -0.04803212 0.01606507 -0.24737747 0.23870207 0.45722153
## [2,] -0.73429739 -0.06558224 0.02165167 -0.08821632 -0.09484667 -0.56183983
## [3,] -0.34442976 -0.03939120 0.10670326 0.83459702 -0.14778574 0.25277609
## [4,] -0.11234648 0.16724740 -0.47798864 -0.22995963 -0.59187851 -0.07506297
## [5,] -0.20810051 -0.01743101 -0.01281893 -0.34717811 0.23948814 0.42758997
## [6,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
## [7,] -0.04573569 0.58708873 0.01651849 -0.01514815 -0.42604773 0.38615891
## [8,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [9,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [10,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
```
So the first “factor” of SVD is as follows:
\\\[\\text{factor}\_1 \=
\-0\.530 \\text{cat} \-0\.734 \\text{dog}\-0\.344 \\text{eat}\-0\.112 \\text{tired} \-0\.208 \\text{toy}\-0\.034 \\text{injured} \-0\.046 \\text{ankle}\-0\.024 \\text{broken} \-0\.024 \\text{swollen} \-0\.034 \\text{sprained} \\]
We can immediately see why people had trouble with LSI as a topic model – it’s hard to intuit how you might treat a mix of positive and negative coefficients in the output. If we ignore the signs and only investigate the absolute values, we can certainly see some meaningful topic information in this first factor: the largest magnitude weights all go to the words from the documents about pets. You might like to say that negative entries mean a topic is *anticorrelated* with that word, and to some extent this is correct. That logic works nicely, in fact, for factor 2:
\\\[\\text{factor}\_2 \= \-0\.048\\text{cat}\-0\.066\\text{dog}\-0\.039\\text{eat}\+ 0\.167\\text{tired} \-0\.017\\text{toy} 0\.370\\text{injured}\+ 0\.587\\text{ankle} \+0\.415\\text{broken} \+ 0\.415\\text{swollen} \+ 0\.370\\text{sprained}\\]
However, circling back to factor 1 then leaves us wanting to see different signs for the two groups of words. Nevertheless, the information separating the words is most certainly present. Take a look at the plot of the words’ loadings along the first two factors in Figure [16\.3](svdapp.html#fig:lsiwords).
Figure 16\.3: Projection of the Terms onto First two Singular Dimensions
Moving on to the documents, we can see a similar clustering pattern in the columns of \\(\\V^T\\) which are the rows of \\(\\V\\), shown below:
```
out$v
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.55253068 -0.05828903 0.10665606 0.74609663 -0.2433982 -0.2530492
## [2,] -0.57064141 -0.02502636 -0.11924683 -0.62022594 -0.1219825 -0.5098650
## [3,] -0.60092838 -0.06088635 0.06280655 -0.10444424 0.3553232 0.7029012
## [4,] -0.04464392 0.65412158 0.05781835 0.12506090 0.6749109 -0.3092635
## [5,] -0.06959068 0.50639918 -0.75339800 0.06438433 -0.3367244 0.2314730
## [6,] -0.03357626 0.55493581 0.63206685 -0.16722869 -0.4803488 0.1808591
```
In fact, the ability to separate the documents with the first two singular vectors is rather magical here, as shown visually in Figure [16\.4](svdapp.html#fig:lsidocs).
Figure 16\.4: Projection of the Docuemnts onto First two Singular Dimensions
Figure [16\.4](svdapp.html#fig:lsidocs) demonstrates how documents that live in a 10\-dimensional term space can be compressed down to 2\-dimensions in a way that captures the major information of interest. If we were to take that 2\-truncated SVD of our term\-document matrix and multiply it back together, we’d see an *approximation* of our original term\-document matrix, and we could calculate the error involved in that approximation. We could equivalently calculate that error by using the singular values.
```
A_approx = out$u[,1:2]%*% diag(out$d[1:2])%*%t(out$v[,1:2])
# Sum of element-wise squared error
(norm(A-A_approx,'F'))^2
```
```
## [1] 24.44893
```
```
# Sum of squared singular values truncated
(sum(out$d[3:6]^2))
```
```
## [1] 1.195292
```
However, multiplying back to the original data is not generally an action of interest to data scientists. What we are after in the SVD is the dimensionality reduced data contained in the columns of \\(\\V^T\\) (or, if you’ve created a document\-term matrix, the rows of \\(\\U\\).
### 16\.1\.1 Note About Rows vs. Columns
You might be asking yourself, “**Hey, wait a minute. Why do we have documents as columns in this matrix? Aren’t the documents like our observations?**” Sure! Many data scientists insist on having the documents on the rows of this matrix. *But*, before you do that, you should realize something. Many SVD and PCA routines are created in a way that is more efficient when your data is long vs. wide, and text data commonly has more terms than documents. The equivalence of the two presentations should be easy to see in all matrix factorization applications. If we have
\\\[\\A \= \\U\\mathbf{D}\\V^T\\] then,
\\\[\\A^T \= \\V\\mathbf{D}\\U^T\\]
so we merely need to switch our interpretations of the left\- and right\-singular vectors to switch from document columns to document rows.
Beyond any computational efficiency argument, we prefer to keep our documents on the columns here because of the emphasis placed earlier in this text regarding matrix multiplication viewed as a linear combination of columns. The animation in Figure [2\.7](mult.html#fig:multlincombanim) is a good thing to be clear on before proceeding here.
### 16\.1\.2 Term Weighting
Term\-document matrices tend to be large and sparse. Term\-weighting schemes are often used to downplay the effect of commonly used words and bolster the effect of rare but semantically important words . The most popular weighting method is known as **Term Frequency\-Inverse Document Frequency (TF\-IDF)**. For this method, the raw term\-frequencies \\(f\_{ij}\\) in the matrix \\(\\A\\) are multiplied by global weights called *inverse document frequencies*, \\(w\_i\\), for each term. These weights reflect the commonality of each term across the entire collection and ultimately quantify a term’s ability to narrow one’s search results (the foundations of text analysis were, after all, dominated by search technology). The inverse document frequency of term \\(i\\) is:
\\\[w\_i \= \\log \\left( \\frac{\\mbox{total \# of documents}}{\\mbox{\# documents containing term } i} \\right)\\]
To put this weight in perspective, for a collection of \\(n\=10,000\\) documents we have \\(0\\leq w\_j \\leq 9\.2\\), where \\(w\_j\=0\\) means the word is contained in every document (rendering it useless for search) and \\(w\_j\=9\.2\\) means the word is contained in only 1 document (making it very useful for search). The document vectors are often normalized to have unit 2\-norm, since their directions (not their lengths) in the term\-space is what characterizes them semantically.
### 16\.1\.3 Other Considerations
In dealing with text, we want to do as much as we can do minimize the size of the dictionary (the collection of terms which enumerate the rows of our term\-document matrix) for both computational and practical reasons. The first effort we’ll make toward this goal is to remove so\-called **stop words**, or very common words that appear in a great many sentences like articles (“a,” “an,” “the”) and prepositions (“about,” “for,” “at”) among others. Many projects also contain domain\-specific stop words. For example, one might remove the word “Reuters” from a corpus of [Reuters’ newswires](https://shainarace.github.io/Reuters/). The second effort we’ll often make is to apply a **stemming** algorithm which reduces words to their *stem.* For example, the words “swimmer” and “swimming” would both be reduced to their stem, “swim.” Stemming and stop word removal can greatly reduce the size of the dictionary and also help draw meaningful connections between documents.
### 16\.1\.4 Latent Semantic Indexing
The noise\-reduction property of the SVD was extended to text processing in 1990 by Susan Dumais et al, who named the effect *Latent Semantic Indexing (LSI)*. LSI involves the singular value decomposition of the term\-document matrix defined in Definition [16\.1](svdapp.html#def:tdm). In other words, it is like a principal components analysis using the unscaled, uncentered inner\-product matrix \\(\\A^T\\A\\). If the documents are normalized to have unit length, this is a matrix of **cosine similarities** (see Chapter [6](norms.html#norms)). Cosine similarity is the most common measure of similarity between documents for text mining. If the term\-document matrix is binary, this is often called the co\-occurrence matrix because each entry gives the number of times two words occur in the same document.
It certainly seems logical to view text data in this context as it contains both an informative signal and semantic noise. LSI quickly grew roots in the information retrieval community, where it is often used for query processing. The idea is to remove semantic noise, due to variation and ambiguity in vocabulary and presentation style, without losing significant amounts of information. For example, a human may not differentiate between the words “car” and “automobile,” but indeed the words will become two separate entities in the raw term\-document matrix. The main idea in LSI is that the realignment of the data into fewer directions should force related documents (like those containing “car” and “automobile”) closer together in an angular sense, thus revealing latent semantic connections.
Purveyors of LSI suggest that the use of the Singular Value Decomposition to project the documents into a lower\-dimensional space results in a representation which reflects the major associative patterns of the data while ignoring less important influences. This projection is done with the simple truncation of the SVD shown in Equation [(15\.3\)](svd.html#eq:truncsvd).
As we have seen with other types of data, the very nature of dimension reduction makes possible for two documents with similar semantic properties to be mapped closer together. Unfortunately, the mixture of signs (positive and negative) in the singular vectors (think principal components) makes the decomposition difficult to interpret. While the major claims of LSI are legitimate, this lack of interpretability is still conceptually problematic for some folks. In order to make this point as clear as possible, consider the original “term basis” representation for the data, where each document (from a collection containing \\(m\\) total terms in the dictionary) could be written as:
\\\[\\A\_j \= \\sum\_{i\=1}^{m} f\_{ij}\\e\_i\\]
where \\(f\_{ij}\\) is the frequency of term \\(i\\) in the document, and \\(\\e\_i\\) is the \\(i^{th}\\) column of the \\(m\\times m\\) identity matrix. The truncated SVD gives us a new set of coordinates (scores) and basis vectors (principal component features):
\\\[\\A\_j \\approx \\sum\_{i\=1}^r \\alpha\_i \\u\_i\\]
but the features \\(\\u\_i\\) live in the term space, and thus ought to be interpretable as a linear combinations of the original “term basis.” However the linear combinations, having both positive and negative coefficients, tends to be semantically obscure in practice \- These new features do not often form meaningful *topics* for the text, although they often do organize in a meaningful way as we will demonstrate in the next section.
### 16\.1\.5 Example
Let’s consider a corpus of short documents, perhaps status updates from social media sites. We’ll keep this corpus as minimal as possible to demonstrate the utility of the SVD for text.
Figure 16\.1: A corpus of 6 documents. Words occurring in more than one document appear in bold. Stop words removed, stemming utilized. Document numbers correspond to term\-document matrix below.
\\\[\\begin{equation\*}
\\begin{array}{cc}
\& \\begin{array}{cccccc} \\;doc\_1\\; \& \\;doc\_2\\;\& \\;doc\_3\\;\& \\;doc\_4\\;\& \\;doc\_5\\;\& \\;doc\_6\\; \\end{array}\\\\
\\begin{array}{c}
\\hbox{cat} \\\\
\\hbox{dog}\\\\
\\hbox{eat}\\\\
\\hbox{tired} \\\\
\\hbox{toy}\\\\
\\hbox{injured} \\\\
\\hbox{ankle} \\\\
\\hbox{broken} \\\\
\\hbox{swollen} \\\\
\\hbox{sprained} \\\\
\\end{array} \&
\\left(
\\begin{array}{cccccc}
\\quad 1\\quad \& \\quad 2\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 3\\quad \& \\quad 2\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 2\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 0\\quad \& \\quad 1\\quad \\\\
\\quad 0\\quad \& \\quad 0\\quad \& \\quad 0\\quad \& \\quad 1\\quad \& \\quad 1\\quad \& \\quad 0\\quad \\\\
\\end{array}\\right)
\\end{array}
\\end{equation\*}\\]
We’ll start by entering this matrix into R. Of course the process of parsing a collection of documents and creating a term\-document matrix is generally more automatic. The `tm` text mining library is recommended for creating a term\-document matrix in practice.
```
A=matrix(c(1,2,2,0,0,0,
2,3,2,0,0,0,
2,0,1,0,0,0,
0,1,0,0,1,0,
0,1,1,0,0,0,
0,0,0,1,1,0,
0,0,0,1,1,1,
0,0,0,1,0,1,
0,0,0,1,0,1,
0,0,0,1,1,0),
nrow=10, byrow=T)
A
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 2 2 0 0 0
## [2,] 2 3 2 0 0 0
## [3,] 2 0 1 0 0 0
## [4,] 0 1 0 0 1 0
## [5,] 0 1 1 0 0 0
## [6,] 0 0 0 1 1 0
## [7,] 0 0 0 1 1 1
## [8,] 0 0 0 1 0 1
## [9,] 0 0 0 1 0 1
## [10,] 0 0 0 1 1 0
```
Because our corpus is so small, we’ll skip the step of term\-weighting, but we *will* normalize the documents to have equal length. In other words, we’ll divide each document vector by its two\-norm so that it becomes a unit vector:
```
A_norm = apply(A, 2, function(x){x/c(sqrt(t(x)%*%x))})
A_norm
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.3333333 0.5163978 0.6324555 0.0000000 0.0 0.0000000
## [2,] 0.6666667 0.7745967 0.6324555 0.0000000 0.0 0.0000000
## [3,] 0.6666667 0.0000000 0.3162278 0.0000000 0.0 0.0000000
## [4,] 0.0000000 0.2581989 0.0000000 0.0000000 0.5 0.0000000
## [5,] 0.0000000 0.2581989 0.3162278 0.0000000 0.0 0.0000000
## [6,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
## [7,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.5773503
## [8,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [9,] 0.0000000 0.0000000 0.0000000 0.4472136 0.0 0.5773503
## [10,] 0.0000000 0.0000000 0.0000000 0.4472136 0.5 0.0000000
```
We then compute the SVD of `A_norm` and observe the left\- and right\-singular vectors. Since the matrix \\(\\A\\) is term\-by\-document, you might consider the terms as being the “units” of the rows of \\(\\A\\) and the documents as being the “units” of the columns. For example, \\(\\A\_{23}\=2\\) could logically be interpreted as “there are 2 units of the word *dog* per *document number 3*.” In this mentality, any factorization of the matrix should preserve those units. Similar to any [“Change of Units Railroad”](https://www.katmarsoftware.com/articles/railroad-track-unit-conversion.htm), matrix factorization can be considered in terms of units assigned to both rows and columns:
\\\[\\A\_{\\text{term} \\times \\text{doc}} \= \\U\_{\\text{term} \\times \\text{factor}}\\mathbf{D}\_{\\text{factor} \\times \\text{factor}}\\V^T\_{\\text{factor} \\times \\text{doc}}\\]
Thus, when we examine the rows of the matrix \\(\\U\\), we’re looking at information about each term and how it contributes to each factor (i.e. the “factors” are just linear combinations of our elementary term vectors); When we examine the columns of the matrix \\(\\V^T\\), we’re looking at information about how each document is related to each factor (i.e. the documents are linear combinations of these factors with weights corresponding to the elements of \\(\\V^T\\)). And what about \\(\\mathbf{D}?\\) Well, in classical factor analysis the matrix \\(\\mathbf{D}\\) is often combined with either \\(\\U\\) or \\(\\V^T\\) to obtain a two\-matrix factorization. \\(\\mathbf{D}\\) describes how much information or signal from our original matrix exists along each of the singular components. It is common to use a **screeplot**, a simple line plot of the singular values in \\(\\mathbf{D}\\), to determine an appropriate *rank* for the truncation in Equation [(15\.3\)](svd.html#eq:truncsvd).
```
out = svd(A_norm)
plot(out$d, ylab = 'Singular Values of A_norm')
```
Figure 16\.2: Screeplot for the Toy Text Dataset
Noticing the gap, or “elbow” in the screeplot at an index of 2 lets us know that the first two singular components contain notably more information than the components to follow \- A major proportion of pattern or signal in this matrix lies long 2 components, i.e. **there are 2 major topics that might provide a reasonable approximation to the data**. What’s a “topic” in a vector space model? A linear combination of terms! It’s just a column vector in the term space! Let’s first examine the left\-singular vectors in \\(\\U\\). Remember, the *rows* of this matrix describe how the terms load onto factors, and the columns are those mysterious “factors” themselves.
```
out$u
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.52980742 -0.04803212 0.01606507 -0.24737747 0.23870207 0.45722153
## [2,] -0.73429739 -0.06558224 0.02165167 -0.08821632 -0.09484667 -0.56183983
## [3,] -0.34442976 -0.03939120 0.10670326 0.83459702 -0.14778574 0.25277609
## [4,] -0.11234648 0.16724740 -0.47798864 -0.22995963 -0.59187851 -0.07506297
## [5,] -0.20810051 -0.01743101 -0.01281893 -0.34717811 0.23948814 0.42758997
## [6,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
## [7,] -0.04573569 0.58708873 0.01651849 -0.01514815 -0.42604773 0.38615891
## [8,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [9,] -0.02427277 0.41546131 0.45839081 -0.07300613 0.07255625 -0.15988106
## [10,] -0.03377822 0.36991575 -0.41154158 0.15837732 0.39526231 -0.10648584
```
So the first “factor” of SVD is as follows:
\\\[\\text{factor}\_1 \=
\-0\.530 \\text{cat} \-0\.734 \\text{dog}\-0\.344 \\text{eat}\-0\.112 \\text{tired} \-0\.208 \\text{toy}\-0\.034 \\text{injured} \-0\.046 \\text{ankle}\-0\.024 \\text{broken} \-0\.024 \\text{swollen} \-0\.034 \\text{sprained} \\]
We can immediately see why people had trouble with LSI as a topic model – it’s hard to intuit how you might treat a mix of positive and negative coefficients in the output. If we ignore the signs and only investigate the absolute values, we can certainly see some meaningful topic information in this first factor: the largest magnitude weights all go to the words from the documents about pets. You might like to say that negative entries mean a topic is *anticorrelated* with that word, and to some extent this is correct. That logic works nicely, in fact, for factor 2:
\\\[\\text{factor}\_2 \= \-0\.048\\text{cat}\-0\.066\\text{dog}\-0\.039\\text{eat}\+ 0\.167\\text{tired} \-0\.017\\text{toy} 0\.370\\text{injured}\+ 0\.587\\text{ankle} \+0\.415\\text{broken} \+ 0\.415\\text{swollen} \+ 0\.370\\text{sprained}\\]
However, circling back to factor 1 then leaves us wanting to see different signs for the two groups of words. Nevertheless, the information separating the words is most certainly present. Take a look at the plot of the words’ loadings along the first two factors in Figure [16\.3](svdapp.html#fig:lsiwords).
Figure 16\.3: Projection of the Terms onto First two Singular Dimensions
Moving on to the documents, we can see a similar clustering pattern in the columns of \\(\\V^T\\) which are the rows of \\(\\V\\), shown below:
```
out$v
```
```
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] -0.55253068 -0.05828903 0.10665606 0.74609663 -0.2433982 -0.2530492
## [2,] -0.57064141 -0.02502636 -0.11924683 -0.62022594 -0.1219825 -0.5098650
## [3,] -0.60092838 -0.06088635 0.06280655 -0.10444424 0.3553232 0.7029012
## [4,] -0.04464392 0.65412158 0.05781835 0.12506090 0.6749109 -0.3092635
## [5,] -0.06959068 0.50639918 -0.75339800 0.06438433 -0.3367244 0.2314730
## [6,] -0.03357626 0.55493581 0.63206685 -0.16722869 -0.4803488 0.1808591
```
In fact, the ability to separate the documents with the first two singular vectors is rather magical here, as shown visually in Figure [16\.4](svdapp.html#fig:lsidocs).
Figure 16\.4: Projection of the Docuemnts onto First two Singular Dimensions
Figure [16\.4](svdapp.html#fig:lsidocs) demonstrates how documents that live in a 10\-dimensional term space can be compressed down to 2\-dimensions in a way that captures the major information of interest. If we were to take that 2\-truncated SVD of our term\-document matrix and multiply it back together, we’d see an *approximation* of our original term\-document matrix, and we could calculate the error involved in that approximation. We could equivalently calculate that error by using the singular values.
```
A_approx = out$u[,1:2]%*% diag(out$d[1:2])%*%t(out$v[,1:2])
# Sum of element-wise squared error
(norm(A-A_approx,'F'))^2
```
```
## [1] 24.44893
```
```
# Sum of squared singular values truncated
(sum(out$d[3:6]^2))
```
```
## [1] 1.195292
```
However, multiplying back to the original data is not generally an action of interest to data scientists. What we are after in the SVD is the dimensionality reduced data contained in the columns of \\(\\V^T\\) (or, if you’ve created a document\-term matrix, the rows of \\(\\U\\).
16\.2 Image Compression
-----------------------
While multiplying back to the original data is not generally something we’d like to do, it does provide a nice illustration of noise\-reduction and signal\-compression when working with images. The following example is not designed to teach you how to work with images for the purposes of data science. It is merely a nice visual way to *see* what’s happening when we truncate the SVD and omit these directions that have “minimal signal.”
### 16\.2\.1 Image data in R
Let’s take an image of a leader that we all know and respect:
Figure 16\.5: Michael Rappa, PhD, Founding Director of the Institute for Advanced Analytics and Distinguished Professor at NC State
This image can be downloaded from the IAA website, after clicking on the link on the left hand side “Michael Rappa / Founding Director.”
Let’s read this image into R. You’ll need to install the pixmap package:
```
#install.packages("pixmap")
library(pixmap)
```
Download the image to your computer and then set your working directory in R as the same place you have saved the image:
```
setwd("/Users/shaina/Desktop/lin-alg")
```
The first thing we will do is examine the image as an \[R,G,B] (extension .ppm) and as a grayscale (extension .pgm). Let’s start with the \[R,G,B] image and see what the data looks like in R:
```
rappa = read.pnm("LAdata/rappa.ppm")
```
```
## Warning in rep(cellres, length = 2): 'x' is NULL so the result will be NULL
```
```
#Show the type of the information contained in our data:
str(rappa)
```
```
## Formal class 'pixmapRGB' [package "pixmap"] with 8 slots
## ..@ red : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ green : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ blue : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ channels: chr [1:3] "red" "green" "blue"
## ..@ size : int [1:2] 160 250
## ..@ cellres : num [1:2] 1 1
## ..@ bbox : num [1:4] 0 0 250 160
## ..@ bbcent : logi FALSE
```
You can see we have 3 matrices \- one for each of the colors: red, green, and blue.
Rather than a traditional data frame, when working with an image, we have to refer to the elements in this data set with `@` rather than with `$`.
```
rappa@size
```
```
## [1] 160 250
```
We can then display a heat map showing the intensity of each individual color in each pixel:
```
rappa.red=rappa@red
rappa.green=rappa@green
rappa.blue=rappa@blue
image(rappa.green)
```
Figure 16\.6: Intensity of green in each pixel of the original image
Oops! Dr. Rappa is sideways. To rotate the graphic, we actually have to rotate our coordinate system. There is an easy way to do this (with a little bit of matrix experience): we simply transpose the matrix and then reorder the columns so the last one is first. (Note that `nrow(rappa.green)` gives the number of columns in the transposed matrix.)
```
rappa.green=t(rappa.green)[,nrow(rappa.green):1]
image(rappa.green)
```
Rather than compressing the colors individually, let’s work with the grayscale image:
```
greyrappa = read.pnm("LAdata/rappa.pgm")
```
```
## Warning in rep(cellres, length = 2): 'x' is NULL so the result will be NULL
```
```
str(greyrappa)
```
```
## Formal class 'pixmapGrey' [package "pixmap"] with 6 slots
## ..@ grey : num [1:160, 1:250] 1 1 1 1 1 1 1 1 1 1 ...
## ..@ channels: chr "grey"
## ..@ size : int [1:2] 160 250
## ..@ cellres : num [1:2] 1 1
## ..@ bbox : num [1:4] 0 0 250 160
## ..@ bbcent : logi FALSE
```
```
rappa.grey=greyrappa@grey
#again, rotate 90 degrees
rappa.grey=t(rappa.grey)[,nrow(rappa.grey):1]
```
```
image(rappa.grey, col=grey((0:1000)/1000))
```
Figure 16\.7: Greyscale representation of original image
### 16\.2\.2 Computing the SVD of Dr. Rappa
Now, let’s use what we know about the SVD to compress this image. First, let’s compute the SVD and save the individual components. Remember that the rows of \\(\\mathbf{v}^T\\) are the right singular vectors. R outputs the matrix \\(\\mathbf{v}\\) which has the singular vectors in columns.
```
rappasvd=svd(rappa.grey)
U=rappasvd$u
d=rappasvd$d
Vt=t(rappasvd$v)
```
Now let’s compute some approximations of rank 3, 10, and 25:
```
rappaR3=U[ ,1:3]%*%diag(d[1:3])%*%Vt[1:3, ]
image(rappaR3, col=grey((0:1000)/1000))
```
Figure 16\.8: Rank 3 approximation of the image data
```
rappaR10=U[ ,1:10]%*%diag(d[1:10])%*%Vt[1:10, ]
image(rappaR10, col=grey((0:1000)/1000))
```
Figure 16\.9: Rank 10 approximation of the image data
```
rappaR25=U[ ,1:25]%*%diag(d[1:25])%*%Vt[1:25, ]
image(rappaR25, col=grey((0:1000)/1000))
```
Figure 16\.10: Rank 25 approximation of the image data
How many singular vectors does it take to recognize Dr. Rappa? Certainly 25 is sufficient. Can you recognize him with even fewer? You can play around with this and see how the image changes.
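If you want to experiment, a small loop like the sketch below (not part of the original text) draws a sequence of approximations; `U`, `d`, and `Vt` are the pieces of the SVD computed above:
```
# Sketch: draw several low-rank approximations for visual comparison.
# diag(d[1:k], nrow = k) guarantees a k x k diagonal matrix even when k = 1.
for (k in c(2, 3, 5, 10, 25)) {
  approx_k <- U[, 1:k] %*% diag(d[1:k], nrow = k) %*% Vt[1:k, ]
  image(approx_k, col = grey((0:1000)/1000), main = paste("Rank", k))
}
```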
### 16\.2\.3 The Noise
One of the main benefits of the SVD is that the *signal\-to\-noise* ratio of each component decreases as we move towards the right end of the SVD sum. If \\(\\mathbf{X}\\) is our data matrix (in this example, it is a matrix of pixel data to create an image) then,
\\\[\\begin{equation}
\\mathbf{X}\= \\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T \+ \\sigma\_2\\mathbf{u}\_2\\mathbf{v}\_2^T \+ \\sigma\_3\\mathbf{u}\_3\\mathbf{v}\_3^T \+ \\dots \+ \\sigma\_r\\mathbf{u}\_r\\mathbf{v}\_r^T
\\tag{15\.2}
\\end{equation}\\]
where \\(r\\) is the rank of the matrix. Our image matrix is full rank, \\(r\=160\\). This is the number of nonzero singular values, \\(\\sigma\_i\\). But, upon examination, we see many of the singular values are nearly 0\. Let’s examine the last 20 singular values:
```
d[140:160]
```
```
## [1] 0.035731961 0.033644986 0.033030189 0.028704912 0.027428124 0.025370919
## [7] 0.024289497 0.022991926 0.020876657 0.020060538 0.018651373 0.018011032
## [13] 0.016299834 0.015668836 0.013928107 0.013046327 0.011403096 0.010763141
## [19] 0.009210187 0.008421977 0.004167310
```
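A quick scree\-type plot of all 160 singular values (a sketch, not in the original text) makes this drop\-off easy to see:
```
# Sketch: visualize the decay of the singular values.
plot(d, type = "b", pch = 20, cex = 0.5,
     xlab = "Component number", ylab = "Singular value")
```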
We can think of these values as the amount of “information” directed along those last 20 singular components. If we assume the noise in the image or data is uniformly distributed along each orthogonal component \\(\\mathbf{u}\_i\\mathbf{v}\_i^T\\), then there is just as much noise in the component \\(\\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T\\) as there is in the component \\(\\sigma\_{160}\\mathbf{u}\_{160}\\mathbf{v}\_{160}^T\\). But, as we’ve just shown, there is far less information in the component \\(\\sigma\_{160}\\mathbf{u}\_{160}\\mathbf{v}\_{160}^T\\) than there is in the component \\(\\sigma\_1\\mathbf{u}\_1\\mathbf{v}\_1^T\\). This means that the later components are primarily noise. Let’s see if we can illustrate this using our image. We’ll construct the parts of the image that are represented on the last few singular components:
```
# Using the last 25 components:
rappa_bad25=U[ ,135:160]%*%diag(d[135:160])%*%Vt[135:160, ]
image(rappa_bad25, col=grey((0:1000)/1000))
```
Figure 16\.11: The last 25 components, or the sum of the last 25 terms in equation [(15\.2\)](svd.html#eq:svdsum)
```
# Using the last 50 components:
rappa_bad50=U[ ,110:160]%*%diag(d[110:160])%*%Vt[110:160, ]
image(rappa_bad50, col=grey((0:1000)/1000))
```
Figure 16\.12: The last 50 components, or the sum of the last 50 terms in equation [(15\.2\)](svd.html#eq:svdsum)
```
# Using the last 100 components: (4 times as many components as it took us to recognize the face on the front end)
rappa_bad100=U[ ,61:160]%*%diag(d[61:160])%*%Vt[61:160, ]
image(rappa_bad100, col=grey((0:1000)/1000))
```
Figure 16\.13: The last 100 components, or the sum of the last 100 terms in equation [(15\.2\)](svd.html#eq:svdsum)
Mostly noise. In the last of these images, we see the outline of Dr. Rappa. One of the first things to go when images are compressed is the crisp outline of objects. This is something you may have witnessed in your own experience, particularly when changing the format of a picture to one that compresses its size.
Chapter 17 Factor Analysis
==========================
Factor Analysis is about looking for underlying *relationships* or *associations*. In that way, factor analysis is a correlational study of variables, aiming to group or cluster variables along dimensions. It may also be used to provide an estimate (factor score) of a latent construct which is a linear combination of variables. For example, a standardized test might ask hundreds of questions on a variety of quantitative and verbal subjects. Each of these questions could be viewed as a variable. However, the quantitative questions collectively are meant to measure some *latent* factor, that is the individual’s *quantitative reasoning*. A Factor Analysis might be able to reveal these two latent factors (quantitative reasoning and verbal ability) and then also provide an estimate (score) for each individual on each factor.
Any attempt to use factor analysis to summarize or reduce a set of data should be based on a conceptual foundation or hypothesis. It should be remembered that factor analysis will produce factors for most sets of data. Thus, if you simply analyze a large number of variables in the hopes that the technique will “figure it out,” your results may look as though they are grasping at straws. The quality or meaning/interpretation of the derived factors is best when related to a conceptual foundation that existed prior to the analysis.
17\.1 Assumptions of Factor Analysis
------------------------------------
1. No outliers in the data set
2. Adequate sample size
    * As a rule of thumb, maintain a ratio of variables to factors of at least 3 (some say 5\). This depends on the application.
    * You should have at least 10 observations for each variable (some say 20\). This often depends on what value of factor loading you want to declare as significant. See Table [17\.2](fa.html#tab:factorsig) for the details on this.
3. No perfect multicollinearity
4. Homoskedasticity *not* required between variables (all variances *not* required to be equal)
5. Linearity of variables desired \- only models linear correlation between variables
6. Interval data (as opposed to nominal)
7. Measurement error on the variables/observations has constant variance and is, on average, 0
8. Normality is not required
17\.2 Determining Factorability
-------------------------------
Before we even begin the process of factor analysis, we have to do some preliminary work to determine whether or not the data even lends itself to this technique. If none of our variables are correlated, then we cannot group them together in any meaningful way! Bartlett’s Sphericity Test and the KMO index are two statistical tests for whether or not a set of variables can be factored. These tests *do not* provide information about the appropriate number of factors, only whether or not such factors even exist.
### 17\.2\.1 Visual Examination of Correlation Matrix
Depending on how many variables you are working with, you may be able to determine whether or not to proceed with factor analysis by simply examining the correlation matrix. With this examination, we are looking for two things:
1. Correlations that are significant at the 0\.01 level of significance. At least half of the correlations should be significant in order to proceed to the next step.
2. Correlations are “sufficient” to justify applying factor analysis. As a rule of thumb, at least half of the correlations should be greater than 0\.30\.
### 17\.2\.2 Bartlett’s Sphericity Test
Bartlett’s sphericity test checks if the observed correlation matrix is significantly different from the identity matrix. Recall that the correlation of two variables is equal to 0 if and only if they are orthogonal (and thus completely uncorrelated). When this is the case, we cannot reduce the number of variables any further and neither PCA nor any other flavor of factor analysis will be able to compress the information reliably into fewer dimensions. For Bartlett’s test, the null hypothesis is:
\\\[H\_0: \\mbox{ The variables are orthogonal,} \\]
which implies that there are no underlying factors to be uncovered. Obviously, we must be able to reject this hypothesis for a meaningful factor model.
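For reference, the test statistic can be computed directly from the determinant of the observed correlation matrix. The sketch below is a generic, hand\-rolled implementation (packages such as `psych` provide ready\-made versions); `X` stands for whatever numeric data matrix you supply:
```
# Sketch of Bartlett's sphericity test for an n x p numeric data matrix X.
# The chi-squared statistic is based on the determinant of the correlation matrix.
bartlett_sphericity <- function(X) {
  n <- nrow(X); p <- ncol(X)
  R <- cor(X)
  chisq <- -((n - 1) - (2 * p + 5) / 6) * log(det(R))
  df    <- p * (p - 1) / 2
  list(chisq = chisq, df = df, p.value = pchisq(chisq, df, lower.tail = FALSE))
}
# Example: bartlett_sphericity(iris[, 1:4])
```
A tiny p\-value leads us to reject the null hypothesis of orthogonality, which is what we need in order to proceed.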
### 17\.2\.3 Kaiser\-Meyer\-Olkin (KMO) Measure of Sampling Adequacy
The goal of the Kaiser\-Meyer\-Olkin (KMO) measure of sampling adequacy is similar to that of Bartlett’s test in that it checks if we can factorize the original variables efficiently. However, the KMO measure is based on the idea of *partial correlation* [\[1]](#ref-kmo). The correlation matrix is always the starting point. We know that the variables are more or less correlated, but the correlation between two variables can be influenced by the others. So, we use the partial correlation in order to measure the relation between two variables by removing the effect of the remaining variables. The KMO index compares the raw values of correlations between variables and those of the partial correlations. If the KMO index is high (\\(\\approx 1\\)), then PCA can act efficiently; if the KMO index is low (\\(\\approx 0\\)), then PCA is not relevant. Generally a KMO index greater than 0\.5 is considered acceptable to proceed with factor analysis. Table [17\.1](fa.html#tab:KMO) contains the information about interpreting KMO results that was provided in the original 1974 paper.
| KMO value | Degree of Common Variance |
| --- | --- |
| 0\.90 to 1\.00 | Marvelous |
| 0\.80 to 0\.89 | Middling |
| 0\.60 to 0\.69 | Mediocre |
| 0\.50 to 0\.59 | Miserable |
| 0\.00 to 0\.49 | Don’t Factor |
Table 17\.1: Interpreting the KMO value. [\[1]](#ref-kmo)
So, for example, if you have a survey with 100 questions/variables and you obtained a KMO index of 0\.61, this tells you that the degree of common variance between your variables is mediocre, on the border of being miserable. While factor analysis may still be appropriate in this case, you will find that such an analysis will not account for a substantial amount of variance in your data. It may still account for enough to draw some meaningful conclusions, however.
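Here is a sketch of how the KMO index can be computed by hand, using the inverse of the correlation matrix to obtain the partial (anti\-image) correlations; again, packages such as `psych` offer ready\-made versions, and `X` is any numeric data matrix:
```
# Sketch of the KMO measure of sampling adequacy for a numeric data matrix X.
kmo_index <- function(X) {
  R <- cor(X)
  Q <- solve(R)                               # inverse correlation matrix
  P <- -Q / sqrt(outer(diag(Q), diag(Q)))     # partial (anti-image) correlations
  diag(R) <- 0; diag(P) <- 0                  # keep only off-diagonal terms
  sum(R^2) / (sum(R^2) + sum(P^2))
}
# Example: kmo_index(iris[, 1:4])
```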
### 17\.2\.4 Significant factor loadings
When performing principal factor analysis on the correlation matrix, we have some clear guidelines for what kind of factor loadings will be deemed “significant” from a statistical viewpoint based on sample size. Table [17\.2](fa.html#tab:factorsig) provides those limits.
| Sample Size Needed for Significance | Factor Loading |
| --- | --- |
| 350 | .30 |
| 250 | .35 |
| 200 | .40 |
| 150 | .45 |
| 120 | .50 |
| 100 | .55 |
| 85 | .60 |
| 70 | .65 |
| 60 | .70 |
| 50 | .75 |
Table 17\.2: For principal component factor analysis on the correlation matrix, the factor loadings provide the correlations of each variable with each factor. This table is a guide for the sample sizes necessary to consider a factor loading significant. For example, in a sample of 100, factor loadings of 0\.55 are considered significant. In a sample size of 70, however, factor loadings must reach 0\.65 to be considered significant. Significance based on 0\.05 level, a power level of 80 percent. Source: *Computations made with SOLO Power Analysis, BMDP Statistical Software, Inc., 1993*
17\.3 Communalities
-------------------
You can think of **communalities** as multiple \\(R^2\\) values for regression models predicting the variables of interest from the factors (the reduced number of factors that your model uses). The communality for a given variable can be interpreted as the proportion of variation in that variable explained by the chosen factors.
Take for example the SAS output for factor analysis on the Iris dataset shown in Figure [17\.1](fa.html#fig:factorOUT). The factor model (which settles on only one single factor) explains 98% of the variability in *petal length*. In other words, if you were to use this factor in a simple linear regression model to predict petal length, the associated \\(R^2\\) value should be 0\.98\. Indeed you can verify that this is true. The results indicate that this single factor model will do the best job explaining variability in *petal length, petal width, and sepal length*.
Figure 17\.1: SAS output for PROC FACTOR using Iris Dataset
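You can check that claim directly in R. A minimal sketch, assuming (as PROC FACTOR defaults to) that the single factor is the first principal component of the standardized iris variables:
```
# Sketch: the communality of Petal.Length should match the R^2 of a
# regression of (standardized) Petal.Length on the single factor's scores.
iris.std <- scale(iris[, 1:4])
factor1  <- prcomp(iris.std)$x[, 1]
summary(lm(iris.std[, "Petal.Length"] ~ factor1))$r.squared   # roughly 0.98
```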
One assessment of how well a factor model is doing can be obtained from the communalities. What you want to see is values that are close to one. This would indicate that the model explains most of the variation for those variables. In this case, the model does better for some variables than it does for others.
If you take all of the communality values, \\(c\_i\\) and add them up you can get a total communality value:
\\\[\\sum\_{i\=1}^p \\widehat{c\_i} \= \\sum\_{i\=1}^k \\widehat{\\lambda\_i}\\]
Here, the total communality is 2\.918\. The proportion of the total variation explained by this single factor is
\\\[\\frac{2\.918}{4}\\approx 0\.75\.\\]
The denominator in that fraction comes from the fact that the correlation matrix is used by default and our dataset has 4 variables. Standardized variables have variance of 1 so the total variance is 4\. This gives us the percentage of variation explained in our model. This might be looked at as an overall assessment of the performance of the model. The individual communalities tell how well the model is working for the individual variables, and the total communality gives an overall assessment of performance.
17\.4 Number of Factors
-----------------------
A common rule of thumb for determining the number of factors in principal factor analysis on the correlation matrix is to only choose factors with associated eigenvalue (or variance) greater than 1\. Since the correlation matrix implies the use of standardized data, each individual variable going in has a variance of 1\. So this rule of thumb simply states that we want our factors to explain more variance than any individual variable from our dataset. If this rule of thumb produces too many factors, it is reasonable to raise that limiting condition only if the number of factors still explains a reasonable amount of the total variance.
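In R, this check amounts to counting the eigenvalues of the correlation matrix that exceed 1; a quick sketch using the iris data:
```
# Sketch: the "eigenvalue greater than 1" rule for the number of factors.
evals <- eigen(cor(iris[, 1:4]))$values
evals
sum(evals > 1)   # number of factors suggested by the rule
```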
17\.5 Rotation of Factors
-------------------------
The purpose of rotating factors is to make them more interpretable. If factor loadings are relatively constant across variables, they don’t help us find latent structure or clusters of variables. This will often happen in PCA when the goal is only to find directions of maximal variance. Thus, once the number of components/factors is fixed and a projection of the data onto a lower\-dimensional subspace is done, we are free to rotate the axes of the result without losing any variance. The axes will no longer be principal components! The amount of variance explained by each factor will change, but the total amount of variance in the reduced data will stay the same because all we have done is rotate the basis. The goal is to rotate the factors in such a way that the loading matrix develops a more *sparse* structure. A sparse loading matrix (one with lots of very small entries and few large entries) is far easier to interpret in terms of finding latent variable groups.
The two most common rotations are **varimax** and **quartimax**. The goal of *varimax* rotation is to maximize the squared factor loadings in each factor, i.e. to simplify the columns of the factor matrix. In each factor, the large loadings are increased and the small loadings are decreased so that each factor has only a few variables with large loadings. In contrast, the goal of *quartimax* rotation is to simplify the rows of the factor matrix. In each variable the large loadings are increased and the small loadings are decreased so that each variable will only load on a few factors. Which of these factor rotations is appropriate depends on whether you want each factor to be described by only a few variables (varimax) or each variable to load on only a few factors (quartimax).
17\.6 Methods of Factor Analysis
--------------------------------
Factor Analysis is much like PCA in that it attempts to find some latent variables (linear combinations of original variables) which can describe large portions of the total variance in data. There are numerous ways to compute factors for factor analysis, the two most common methods are:
1. The *principal axis* method (i.e. PCA) and
2. Maximum Likelihood Estimation.
In fact, the default method for SAS’s PROC FACTOR with no additional options is merely PCA. For some reason, the scores and factors may be scaled differently, involving the standard deviations of each factor, but nonetheless, there is absolutely nothing different between PROC FACTOR defaults and PROC PRINCOMP.
The difference between Factor Analysis and PCA is two\-fold:
1. In factor analysis, the factors are usually rotated to obtain a more sparse (i.e. interpretable) structure (*varimax* rotation is the most common rotation; others include *promax* and *quartimax*).
2. The factors try to only explain the “common variance” between variables. In other words, Factor Analysis tries to estimate how much of each variable’s variance is specific to that variable and not “covarying” (for lack of a better word) with any other variables. This specific variance is often subtracted from the diagonal of the covariance matrix before factors or components are found.
We’ll talk more about the first difference than the second because it generally carries more advantages.
### 17\.6\.1 PCA Rotations
Let’s first talk about the motivation behind principal component rotations. Compare the following sets of (fabricated) factors, both using the variables from the iris dataset. Listed below are the loadings of each variable on two factors. Which set of factors is more easily interpreted?
| Variable | P1 | P2 |
| --- | --- | --- |
| Sepal.Length | \-.3 | .7 |
| Sepal.Width | \-.5 | .4 |
| Petal.Length | .7 | .3 |
| Petal.Width | .4 | \-.5 |
Table 17\.3: Factor Loadings: Set 1
| Variable | F1 | F2 |
| --- | --- | --- |
| Sepal.Length | 0 | .9 |
| Sepal.Width | \-.9 | 0 |
| Petal.Length | .8 | 0 |
| Petal.Width | .1 | \-.9 |
Table 17\.4: Factor Loadings: Set 2
The difference between these factors might be described as “sparsity.” Factor Set 2 has more zero loadings than Factor Set 1\. It also has entries which are comparatively larger in magnitude. This makes Factor Set 2 much easier to interpret! Clearly F1 is dominated by the variables Sepal.Width (negatively correlated) and Petal.Length (positively correlated), whereas F2 is dominated by the variables Sepal.Length (positively) and Petal.Width (negatively). Factor interpretation doesn’t get much easier than that! With the first set of factors, the story is not so clear.
This is the whole purpose of factor rotation, to increase the interpretability of factors by encouraging sparsity. Geometrically, factor rotation tries to rotate a given set of factors (like those derived from PCA) to be more closely aligned with the original variables once the dimensions of the space have been reduced and the variables have been pushed closer together in the factor space. Let’s take a look at the actual principal components from the iris data and then rotate them using a varimax rotation. In order to rotate the factors, we have to decide on some number of factors to use. If we rotated all 4 orthogonal components to find sparsity, we’d just end up with our original variables again!
```
irispca = princomp(iris[,1:4],scale=T)
```
```
## Warning: In princomp.default(iris[, 1:4], scale = T) :
## extra argument 'scale' will be disregarded
```
```
summary(irispca)
```
```
## Importance of components:
## Comp.1 Comp.2 Comp.3 Comp.4
## Standard deviation 2.0494032 0.49097143 0.27872586 0.153870700
## Proportion of Variance 0.9246187 0.05306648 0.01710261 0.005212184
## Cumulative Proportion 0.9246187 0.97768521 0.99478782 1.000000000
```
```
irispca$loadings
```
```
##
## Loadings:
## Comp.1 Comp.2 Comp.3 Comp.4
## Sepal.Length 0.361 0.657 0.582 0.315
## Sepal.Width 0.730 -0.598 -0.320
## Petal.Length 0.857 -0.173 -0.480
## Petal.Width 0.358 -0.546 0.754
##
## Comp.1 Comp.2 Comp.3 Comp.4
## SS loadings 1.00 1.00 1.00 1.00
## Proportion Var 0.25 0.25 0.25 0.25
## Cumulative Var 0.25 0.50 0.75 1.00
```
```
# Since 2 components explain a large proportion of the variation, lets settle on those two:
rotatedpca = varimax(irispca$loadings[,1:2])
rotatedpca$loadings
```
```
##
## Loadings:
## Comp.1 Comp.2
## Sepal.Length 0.223 0.716
## Sepal.Width -0.229 0.699
## Petal.Length 0.874
## Petal.Width 0.366
##
## Comp.1 Comp.2
## SS loadings 1.00 1.00
## Proportion Var 0.25 0.25
## Cumulative Var 0.25 0.50
```
```
# Not a drastic amount of difference, but clearly an attempt has been made to encourage
# sparsity in the vectors of loadings.
# NOTE: THE ROTATED FACTORS EXPLAIN THE SAME AMOUNT OF VARIANCE AS THE FIRST TWO PCS
# AFTER PROJECTING THE DATA INTO TWO DIMENSIONS (THE BIPLOT) ALL WE DID WAS ROTATE THOSE
# ORTHOGONAL AXIS. THIS CHANGES THE PROPORTION EXPLAINED BY *EACH* AXIS, BUT NOT THE TOTAL
# AMOUNT EXPLAINED BY THE TWO TOGETHER.
# The output from varimax can't tell you about proportion of variance in the original data
# because you didn't even tell it what the original data was!
```
17\.7 Case Study: Personality Tests
-----------------------------------
In this example, we’ll use a publicly available dataset that describes personality traits of nearly 20,000 respondents.
Read in the Big5 Personality test dataset, which contains Likert scale responses (five point scale where 1\=Disagree, 3\=Neutral, 5\=Agree. 0 \= missing) on 50 different questions in columns 8 through 57\. The questions, labeled E1\-E10 (E\=extroversion), N1\-N10 (N\=neuroticism), A1\-A10 (A\=agreeableness), C1\-C10 (C\=conscientiousness), and O1\-O10 (O\=openness) all attempt to measure 5 key angles of human personality. The first 7 columns contain demographic information coded as follows:
1. **Race** Chosen from a drop down menu.
    * 1\=Mixed Race
    * 2\=Arctic (Siberian, Eskimo)
    * 3\=Caucasian (European)
    * 4\=Caucasian (Indian)
    * 5\=Caucasian (Middle East)
    * 6\=Caucasian (North African, Other)
    * 7\=Indigenous Australian
    * 8\=Native American
    * 9\=North East Asian (Mongol, Tibetan, Korean Japanese, etc)
    * 10\=Pacific (Polynesian, Micronesian, etc)
    * 11\=South East Asian (Chinese, Thai, Malay, Filipino, etc)
    * 12\=West African, Bushmen, Ethiopian
    * 13\=Other (0\=missed)
2. **Age** Entered as text (individuals reporting age \< 13 were not recorded)
3. **Engnat** Response to “is English your native language?”
    * 1\=yes
    * 2\=no
    * 0\=missing
4. **Gender** Chosen from a drop down menu
    * 1\=Male
    * 2\=Female
    * 3\=Other
    * 0\=missing
5. **Hand** “What hand do you use to write with?”
    * 1\=Right
    * 2\=Left
    * 3\=Both
    * 0\=missing
```
options(digits=2)
big5 = read.csv('http://birch.iaa.ncsu.edu/~slrace/LinearAlgebra2021/Code/big5.csv')
```
To perform the same analysis we did in SAS, we want to use Correlation PCA and rotate the axes with a varimax transformation. We will start by performing the PCA. We need to set the option `scale=T` to perform PCA on the correlation matrix rather than the default covariance matrix. We will only compute the first 5 principal components because we have 5 personality traits we are trying to measure. We could also compute more than 5 and take the number of components with eigenvalues \>1 to match the default output in SAS (without n\=5 option).
### 17\.7\.1 Raw PCA Factors
```
options(digits=5)
pca.out = prcomp(big5[,8:57], rank = 5, scale = T)
```
Remember the only difference between the default PROC PRINCOMP output and the default PROC FACTOR output in SAS was the fact that the eigenvectors in PROC PRINCOMP were normalized to be unit vectors and the factor vectors in PROC FACTOR were those same eigenvectors scaled by the square roots of the eigenvalues. So we want to multiply each eigenvector column output in `pca.out$rotation` (recall this is the loading matrix or matrix of eigenvectors) by the square root of the corresponding eigenvalue given in `pca.out$sdev`. You’ll recall that multiplying a matrix by a diagonal matrix on the right has the effect of scaling the columns of the matrix. So we’ll just make a diagonal matrix, \\(\\textbf{S}\\) with diagonal elements from the `pca.out$sdev` vector and scale the columns of the `pca.out$rotation` matrix. Similarly, the coordinates of the data along each component then need to be *divided* by the standard deviation to cancel out this effect of lengthening the axis. So again we will multiply by a diagonal matrix to perform this scaling, but this time, we use the diagonal matrix \\(\\textbf{S}^{\-1}\=\\) `diag(1/(pca.out$sdev))`.
Matrix multiplication in R is performed with the `%*%` operator.
```
fact.loadings = pca.out$rotation[,1:5] %*% diag(pca.out$sdev[1:5])
fact.scores = pca.out$x[,1:5] %*%diag(1/pca.out$sdev[1:5])
# PRINT OUT THE FIRST 5 ROWS OF EACH MATRIX FOR CONFIRMATION.
fact.loadings[1:5,1:5]
```
```
## [,1] [,2] [,3] [,4] [,5]
## E1 -0.52057 0.27735 -0.29183 0.13456 -0.25072
## E2 0.51025 -0.35942 0.26959 -0.14223 0.21649
## E3 -0.70998 0.15791 -0.11623 0.21768 -0.11303
## E4 0.58361 -0.20341 0.31433 -0.17833 0.22788
## E5 -0.65751 0.31924 -0.16404 0.12496 -0.21810
```
```
fact.scores[1:5,1:5]
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] -2.53286 -1.16617 0.276244 0.043229 -0.069518
## [2,] 0.70216 -1.22761 1.095383 1.615919 -0.562371
## [3,] -0.12575 1.33180 1.525208 -1.163062 -2.949501
## [4,] 1.29926 1.17736 0.044168 -0.784411 0.148903
## [5,] -0.37359 0.47716 0.292680 1.233652 0.406582
```
This should match the output from SAS and it does. Remember these columns are unique up to a sign, so you’ll see factor 4 does not have the same sign in both software outputs. This is not cause for concern.
Figure 17\.2: Default (Unrotated) Factor Loadings Output by SAS
Figure 17\.3: Default (Unrotated) Factor Scores Output by SAS
### 17\.7\.2 Rotated Principal Components
The next task we may want to undertake is a rotation of the factor axes according to the varimax procedure. The most simple way to go about this is to use the `varimax()` function to find the optimal rotation of the eigenvectors in the matrix `pca.out$rotation`. The `varimax()` function outputs both the new set of axes in the matrix called `loadings` and the rotation matrix (`rotmat`) which performs the rotation from the original principal component axes to the new axes. (i.e. if \\(\\textbf{V}\\) contains the old axes as columns and \\(\\hat{\\textbf{V}}\\) contains the new axes and \\(\\textbf{R}\\) is the rotation matrix then \\(\\hat{\\textbf{V}} \= \\textbf{V}\\textbf{R}\\).) That rotation matrix can be used to perform the same rotation on the scores of the observations. If the matrix \\(\\textbf{U}\\) contains the scores for each observation, then the rotated scores \\(\\hat{\\textbf{U}}\\) are found by \\(\\hat{\\textbf{U}} \= \\textbf{U}\\textbf{R}\\)
```
varimax.out = varimax(fact.loadings)
rotated.fact.loadings = fact.loadings %*% varimax.out$rotmat
rotated.fact.scores = fact.scores %*% varimax.out$rotmat
# PRINT OUT THE FIRST 5 ROWS OF EACH MATRIX FOR CONFIRMATION.
rotated.fact.loadings[1:5,]
```
```
## [,1] [,2] [,3] [,4] [,5]
## E1 -0.71232 -0.0489043 0.010596 -0.03206926 0.055858
## E2 0.71592 -0.0031185 0.028946 0.03504236 -0.121241
## E3 -0.66912 -0.2604049 0.131609 0.01704690 0.263679
## E4 0.73332 0.1528552 -0.023367 0.00094685 -0.053219
## E5 -0.74534 -0.0757539 0.100875 -0.07140722 0.218602
```
```
rotated.fact.scores[1:5,]
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] -1.09083 -2.04516 1.40699 -0.38254 0.5998386
## [2,] 0.85718 -0.19268 1.07708 2.03665 -0.2178616
## [3,] -0.92344 2.58761 2.43566 -0.80840 -0.1833138
## [4,] 0.61935 1.53087 -0.79225 -0.59901 -0.0064665
## [5,] -0.39495 -0.10893 -0.24892 0.99744 0.9567712
```
And again we can see that these line up with our SAS Rotated output, **however** the order does not have to be the same! SAS conveniently reorders the columns according to the variance of the data along that new direction. Since we have not done that in R, the order of the columns is not the same! Factors 1 and 2 are the same in both outputs, but SAS Factor 3 \= R Factor 4 and SAS Factor 5 \= (\-1\)\* R Factor 4\. The coordinates are switched too so nothing changes in our interpretation. Remember, when you rotate factors, you no longer keep the notion that the “first vector” explains the most variance unless you reorder them so that is true (like SAS does).
Figure 17\.4: Rotated Factor Loadings Output by SAS
Figure 17\.5: Rotated Factor Scores Output by SAS
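If you would like the R output to follow SAS’s convention, you can reorder the rotated columns by the variance of the scores along each axis. A sketch, building on the objects created above (the `.sorted` names are just for illustration):
```
# Sketch: reorder rotated factors by decreasing variance of the scores,
# mimicking SAS's ordering of rotated factors.
axis.var <- apply(rotated.fact.scores, 2, var)
ord      <- order(axis.var, decreasing = TRUE)
rotated.fact.loadings.sorted <- rotated.fact.loadings[, ord]
rotated.fact.scores.sorted   <- rotated.fact.scores[, ord]
round(axis.var[ord], 3)
```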
### 17\.7\.3 Visualizing Rotation via BiPlots
Let’s start with a peek at BiPlots of the first 2 principal component loadings, prior to rotation. Notice that here I’m not going to bother with any scaling of the factor loadings as I’m not interested in forcing my output to look like SAS’s output. I’m also downsampling the observations because 20,000 is far too many to plot.
```
biplot(pca.out$x[sample(1:19719,1000),1:2],
pca.out$rotation[,1:2],
cex=c(0.2,1))
```
Figure 17\.6: BiPlot of Projection onto PC1 and PC2
```
biplot(pca.out$x[sample(1:19719,1000),3:4],
pca.out$rotation[,3:4],
cex=c(0.2,1))
```
Figure 17\.7: BiPlot of Projection onto PC3 and PC4
Let’s see what happens to these biplots after rotation:
```
vmax = varimax(pca.out$rotation)
newscores = pca.out$x%*%vmax$rotmat
biplot(newscores[sample(1:19719,1000),1:2],
vmax$loadings[,1:2],
cex=c(0.2,1),
xlab = 'Rotated Axis 1',
ylab = 'Rotated Axis 2')
```
Figure 17\.8: BiPlot of Projection onto Rotated Axes 1,2\. Extroversion questions align with axis 1, Neuroticism with Axis 2
```
biplot(newscores[sample(1:19719,1000),3:4],
vmax$loadings[,3:4],
cex=c(0.2,1),
xlab = 'Rotated Axis 3',
ylab = 'Rotated Axis 4')
```
Figure 17\.9: BiPlot of Projection onto Rotated Axes 3,4\. Agreeableness questions align with axis 3, Openness with Axis 4\.
After the rotation, we can see the BiPlots tell a more distinct story. The extroversion questions line up along rotated axes 1, neuroticism along rotated axes 2, and agreeableness and openness are reflected in rotated axes 3 and 4 respectively. The fifth rotated component can be confirmed to represent the last remaining category which is conscientiousness.
17\.1 Assumptions of Factor Analysis
------------------------------------
1. No outliers in the data set
- Adequate sample size
* As a rule of thumb, maintain a ratio of variables to factors of at least 3 (some say 5\). This depends on the application.
* You should have at least 10 observations for each variable (some say 20\). This often depends on what value of factor loading you want to declare as significant. See Table [17\.2](fa.html#tab:factorsig) for the details on this.- No perfect multicollinearity
- Homoskedasticity *not* required between variables (all variances *not* required to be equal)
- Linearity of variables desired \- only models linear correlation between variables
- Interval data (as opposed to nominal)
- Measurement error on the variables/observations has constant variance and is, on average, 0
- Normality is not required
17\.2 Determining Factorability
-------------------------------
Before we even begin the process of factor analysis, we have to do some preliminary work to determine whether or not the data even lends itself to this technique. If none of our variables are correlated, then we cannot group them together in any meaningful way! Bartlett’s Sphericity Test and the KMO index are two statistical tests for whether or not a set of variables can be factored. These tests *do not* provide information about the appropriate number of factors, only whether or not such factors even exist.
### 17\.2\.1 Visual Examination of Correlation Matrix
Depending on how many variables you are working with, you may be able to determine whether or not to proceed with factor analysis by simply examining the correlation matrix. With this examination, we are looking for two things:
1. Correlations that are significant at the 0\.01 level of significance. At least half of the correlations should be significant in order to proceed to the next step.
2. Correlations are “sufficient” to justify applying factor analysis. As a rule of thumb, at least half of the correlations should be greater than 0\.30\.
### 17\.2\.2 Barlett’s Sphericity Test
Barlett’s sphericity test checks if the observed correlation matrix is significantly different from the identity matrix. Recall that the correlation of two variables is equal to 0 if and only if they are orthogonal (and thus completely uncorrelated). When this is the case, we cannot reduce the number of variables any further and neither PCA nor any other flavor of factor analysis will be able to compress the information reliably into fewer dimensions. For Barlett’s test, the null hypothesis is:
\\\[H\_0 \= \\mbox{ The variables are orthogonal,} \\]
which implies that there are no underlying factors to be uncovered. Obviously, we must be able to reject this hypothesis for a meaningful factor model.
### 17\.2\.3 Kaiser\-Meyer\-Olkin (KMO) Measure of Sampling Adequacy
The goal of the Kaiser\-Meyer\-Olkin (KMO) measure of sampling adequacy is similar to that of Bartlett’s test in that it checks if we can factorize the original variables efficiently. However, the KMO measure is based on the idea of *partial correlation* [\[1]](#ref-kmo). The correlation matrix is always the starting point. We know that the variables are more or less correlated, but the correlation between two variables can be influenced by the others. So, we use the partial correlation in order to measure the relation between two variables by removing the effect of the remaining variables. The KMO index compares the raw values of correlations between variables and those of the partial correlations. If the KMO index is high (\\(\\approx 1\\)), then PCA can act efficiently; if the KMO index is low (\\(\\approx 0\\)), then PCA is not relevant. Generally a KMO index greater than 0\.5 is considered acceptable to proceed with factor analysis. Table [17\.1](fa.html#tab:KMO) contains the information about interpretting KMO results that was provided in the original 1974 paper.
| KMO value Degree of Common Variance | |
| --- | --- |
| 0\.90 to 1\.00 Marvelous | |
| 0\.80 to 0\.89 Middling | |
| 0\.60 to 0\.69 Mediocre | |
| 0\.50 to 0\.59 Miserable | |
| 0\.00 to 0\.49 Don’t Factor | |
Table 17\.1: Interpretting the KMO value. [\[1]](#ref-kmo)
So, for example, if you have a survey with 100 questions/variables and you obtained a KMO index of 0\.61, this tells you that the degree of common variance between your variables is mediocre, on the border of being miserable. While factor analysis may still be appropriate in this case, you will find that such an analysis will not account for a substantial amount of variance in your data. It may still account for enough to draw some meaningful conclusions, however.
### 17\.2\.4 Significant factor loadings
When performing principal factor analysis on the correlation matrix, we have some clear guidelines for what kind of factor loadings will be deemed “significant” from a statistical viewpoint based on sample size. Table [17\.2](fa.html#tab:factorsig) provides those limits.
| Sample Size Needed for Significance Factor Loading | |
| --- | --- |
| 350 .30 | |
| 250 .35 | |
| 200 .40 | |
| 150 .45 | |
| 120 .50 | |
| 100 .55 | |
| 85 .60 | |
| 70 .65 | |
| 60 .70 | |
| 50 .75 | |
Table 17\.2: For principal component factor analysis on the correlation matrix, the factor loadings provide the correlations of each variable with each factor. This table is a guide for the sample sizes necessary to consider a factor loading significant. For example, in a sample of 100, factor loadings of 0\.55 are considered significant. In a sample size of 70, however, factor loadings must reach 0\.65 to be considered significant. Significance based on 0\.05 level, a power level of 80 percent. Source: *Computations made with SOLO Power Analysis, BMDP Statistical Software, Inc., 1993*
### 17\.2\.1 Visual Examination of Correlation Matrix
Depending on how many variables you are working with, you may be able to determine whether or not to proceed with factor analysis by simply examining the correlation matrix. With this examination, we are looking for two things:
1. Correlations that are significant at the 0\.01 level of significance. At least half of the correlations should be significant in order to proceed to the next step.
2. Correlations are “sufficient” to justify applying factor analysis. As a rule of thumb, at least half of the correlations should be greater than 0\.30\.
### 17\.2\.2 Barlett’s Sphericity Test
Barlett’s sphericity test checks if the observed correlation matrix is significantly different from the identity matrix. Recall that the correlation of two variables is equal to 0 if and only if they are orthogonal (and thus completely uncorrelated). When this is the case, we cannot reduce the number of variables any further and neither PCA nor any other flavor of factor analysis will be able to compress the information reliably into fewer dimensions. For Barlett’s test, the null hypothesis is:
\\\[H\_0 \= \\mbox{ The variables are orthogonal,} \\]
which implies that there are no underlying factors to be uncovered. Obviously, we must be able to reject this hypothesis for a meaningful factor model.
### 17\.2\.3 Kaiser\-Meyer\-Olkin (KMO) Measure of Sampling Adequacy
The goal of the Kaiser\-Meyer\-Olkin (KMO) measure of sampling adequacy is similar to that of Bartlett’s test in that it checks if we can factorize the original variables efficiently. However, the KMO measure is based on the idea of *partial correlation* [\[1]](#ref-kmo). The correlation matrix is always the starting point. We know that the variables are more or less correlated, but the correlation between two variables can be influenced by the others. So, we use the partial correlation in order to measure the relation between two variables by removing the effect of the remaining variables. The KMO index compares the raw values of correlations between variables and those of the partial correlations. If the KMO index is high (\\(\\approx 1\\)), then PCA can act efficiently; if the KMO index is low (\\(\\approx 0\\)), then PCA is not relevant. Generally a KMO index greater than 0\.5 is considered acceptable to proceed with factor analysis. Table [17\.1](fa.html#tab:KMO) contains the information about interpretting KMO results that was provided in the original 1974 paper.
| KMO value Degree of Common Variance | |
| --- | --- |
| 0\.90 to 1\.00 Marvelous | |
| 0\.80 to 0\.89 Middling | |
| 0\.60 to 0\.69 Mediocre | |
| 0\.50 to 0\.59 Miserable | |
| 0\.00 to 0\.49 Don’t Factor | |
Table 17\.1: Interpretting the KMO value. [\[1]](#ref-kmo)
So, for example, if you have a survey with 100 questions/variables and you obtained a KMO index of 0\.61, this tells you that the degree of common variance between your variables is mediocre, on the border of being miserable. While factor analysis may still be appropriate in this case, you will find that such an analysis will not account for a substantial amount of variance in your data. It may still account for enough to draw some meaningful conclusions, however.
### 17\.2\.4 Significant factor loadings
When performing principal factor analysis on the correlation matrix, we have some clear guidelines for what kind of factor loadings will be deemed “significant” from a statistical viewpoint based on sample size. Table [17\.2](fa.html#tab:factorsig) provides those limits.
| Sample Size Needed for Significance Factor Loading | |
| --- | --- |
| 350 .30 | |
| 250 .35 | |
| 200 .40 | |
| 150 .45 | |
| 120 .50 | |
| 100 .55 | |
| 85 .60 | |
| 70 .65 | |
| 60 .70 | |
| 50 .75 | |
Table 17\.2: For principal component factor analysis on the correlation matrix, the factor loadings provide the correlations of each variable with each factor. This table is a guide for the sample sizes necessary to consider a factor loading significant. For example, in a sample of 100, factor loadings of 0\.55 are considered significant. In a sample size of 70, however, factor loadings must reach 0\.65 to be considered significant. Significance based on 0\.05 level, a power level of 80 percent. Source: *Computations made with SOLO Power Analysis, BMDP Statistical Software, Inc., 1993*
17\.3 Communalities
-------------------
You can think of **communalities** as multiple \\(R^2\\) values for regression models predicting the variables of interest from the factors (the reduced number of factors that your model uses). The communality for a given variable can be interpreted as the proportion of variation in that variable explained by the chosen factors.
Take for example the SAS output for factor analysis on the Iris dataset shown in Figure [**??**](#fig:factorOUT). The factor model (which settles on only one single factor) explains 98% of the variability in *petal length*. In other words, if you were to use this factor in a simple linear regression model to predict petal length, the associated \\(R^2\\) value should be 0\.98\. Indeed you can verify that this is true. The results indicate that this single factor model will do the best job explaining variability in *petal length, petal width, and sepal length*.
Figure 17\.1: SAS output for PROC FACTOR using Iris Dataset
One assessment of how well a factor model is doing can be obtained from the communalities. What you want to see is values that are close to one. This would indicate that the model explains most of the variation for those variables. In this case, the model does better for some variables than it does for others.
If you take all of the communality values, \\(c\_i\\) and add them up you can get a total communality value:
\\\[\\sum\_{i\=1}^p \\widehat{c\_i} \= \\sum\_{i\=1}^k \\widehat{\\lambda\_i}\\]
Here, the total communality is 2\.918\. The proportion of the total variation explained by the three factors is
\\\[\\frac{2\.918}{4}\\approx 0\.75\.\\]
The denominator in that fraction comes from the fact that the correlation matrix is used by default and our dataset has 4 variables. Standardized variables have variance of 1 so the total variance is 4\. This gives us the percentage of variation explained in our model. This might be looked at as an overall assessment of the performance of the model. The individual communalities tell how well the model is working for the individual variables, and the total communality gives an overall assessment of performance.
17\.4 Number of Factors
-----------------------
A common rule of thumb for determining the number of factors in principal factor analysis on the correlation matrix is to only choose factors with associated eigenvalue (or variance) greater than 1\. Since the correlation matrix implies the use of standardized data, each individual variable going in has a variance of 1\. So this rule of thumb simply states that we want our factors to explain more variance than any individual variable from our dataset. If this rule of thumb produces too many factors, it is reasonable to raise that limiting condition only if the number of factors still explains a reasonable amount of the total variance.
17\.5 Rotation of Factors
-------------------------
The purpose of rotating factors is to make them more interpretable. If factor loadings are relatively constant across variables, they don’t help us find latent structure or clusters of variables. This will often happen in PCA when the goal is only to find directions of maximal variance. Thus, once the number of components/factors is fixed and a projection of the data onto a lower\-dimensional subspace is done, we are free to rotate the axes of the result without losing any variance. The axes will no longer be principal components! The amount of variance explained by each factor will change, but the total amount of variance in the reduced data will stay the same because all we have done is rotate the basis. The goal is to rotate the factors in such a way that the loading matrix develops a more *sparse* structure. A sparse loading matrix (one with lots of very small entries and few large entries) is far easier to interpret in terms of finding latent variable groups.
The two most common rotations are **varimax** and **quartimax**. The goal of *varimax* rotation is to maximize the squared factor loadings in each factor, i.e. to simplify the columns of the factor matrix. In each factor, the large loadings are increased and the small loadings are decreased so that each factor has only a few variables with large loadings. In contrast, the goal of *quartimax* rotation is to simply the rows of the factor matrix. In each variable the large loadings are increased and the small loadings are decreased so that each variable will only load on a few factors. Which of these factor rotations is appropriate
17\.6 Methods of Factor Analysis
--------------------------------
Factor Analysis is much like PCA in that it attempts to find some latent variables (linear combinations of original variables) which can describe large portions of the total variance in data. There are numerous ways to compute factors for factor analysis, the two most common methods are:
1. The *principal axis* method (i.e. PCA) and
2. Maximum Likelihood Estimation.
In fact, the default method for SAS’s PROC FACTOR with no additional options is merely PCA. For some reason, the scores and factors may be scaled differently, involving the standard deviations of each factor, but nonetheless, there is absolutely nothing different between PROC FACTOR defaults and PROC PRINCOMP.
The difference between Factor Analysis and PCA is two\-fold:
1. In factor analysis, the factors are usually rotated to obtain a more sparse (i.e. interpretable) structure. *Varimax* rotation is the most common rotation; others include *promax* and *quartimax*.
2. The factors try to only explain the “common variance” between variables. In other words, Factor Analysis tries to estimate how much of each variable’s variance is specific to that variable and not “covarying” (for lack of a better word) with any other variables. This specific variance is often subtracted from the diagonal of the covariance matrix before factors or components are found.
We’ll talk more about the first difference than the second because it generally carries more advantages.
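As a hedged illustration of that second difference, base R’s `factanal()` fits a maximum likelihood factor analysis and reports each variable’s *uniqueness*, the specific variance not shared with the other variables; only the remaining common variance is attributed to the factors. With just the four iris measurements, at most one ML factor can be estimated:
```
fa = factanal(iris[, 1:4], factors = 1)
fa$uniquenesses        # specific variance for each variable
1 - fa$uniquenesses    # communalities: shared variance explained by the factor
```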
### 17\.6\.1 PCA Rotations
Let’s first talk about the motivation behind principal component rotations. Compare the following sets of (fabricated) factors, both using the variables from the iris dataset. Listed below are the loadings of each variable on two factors. Which set of factors is more easily interpreted?
| Variable | P1 | P2 |
| --- | --- | --- |
| Sepal.Length | \-.3 | .7 |
| Sepal.Width | \-.5 | .4 |
| Petal.Length | .7 | .3 |
| Petal.Width | .4 | \-.5 |
Table 17\.3: Factor Loadings: Set 1
| Variable | F1 | F2 |
| --- | --- | --- |
| Sepal.Length | 0 | .9 |
| Sepal.Width | \-.9 | 0 |
| Petal.Length | .8 | 0 |
| Petal.Width | .1 | \-.9 |
Table 17\.4: Factor Loadings: Set 2
The difference between these factors might be described as “sparsity.” Factor Set 2 has more zero loadings than Factor Set 1\. It also has entries which are comparatively larger in magnitude. This makes Factor Set 2 much easier to interpret! Clearly F1 is dominated by the variables Sepal.Width (negatively correlated) and Petal.Length (positively correlated), whereas F2 is dominated by the variables Sepal.Length (positively) and Petal.Width (negatively). Factor interpretation doesn’t get much easier than that! With the first set of factors, the story is not so clear.
This is the whole purpose of factor rotation, to increase the interpretability of factors by encouraging sparsity. Geometrically, factor rotation tries to rotate a given set of factors (like those derived from PCA) to be more closely aligned with the original variables once the dimensions of the space have been reduced and the variables have been pushed closer together in the factor space. Let’s take a look at the actual principal components from the iris data and then rotate them using a varimax rotation. In order to rotate the factors, we have to decide on some number of factors to use. If we rotated all 4 orthogonal components to find sparsity, we’d just end up with our original variables again!
```
irispca = princomp(iris[,1:4],scale=T)
```
```
## Warning: In princomp.default(iris[, 1:4], scale = T) :
## extra argument 'scale' will be disregarded
```
```
summary(irispca)
```
```
## Importance of components:
## Comp.1 Comp.2 Comp.3 Comp.4
## Standard deviation 2.0494032 0.49097143 0.27872586 0.153870700
## Proportion of Variance 0.9246187 0.05306648 0.01710261 0.005212184
## Cumulative Proportion 0.9246187 0.97768521 0.99478782 1.000000000
```
```
irispca$loadings
```
```
##
## Loadings:
## Comp.1 Comp.2 Comp.3 Comp.4
## Sepal.Length 0.361 0.657 0.582 0.315
## Sepal.Width 0.730 -0.598 -0.320
## Petal.Length 0.857 -0.173 -0.480
## Petal.Width 0.358 -0.546 0.754
##
## Comp.1 Comp.2 Comp.3 Comp.4
## SS loadings 1.00 1.00 1.00 1.00
## Proportion Var 0.25 0.25 0.25 0.25
## Cumulative Var 0.25 0.50 0.75 1.00
```
```
# Since 2 components explain a large proportion of the variation, lets settle on those two:
rotatedpca = varimax(irispca$loadings[,1:2])
rotatedpca$loadings
```
```
##
## Loadings:
## Comp.1 Comp.2
## Sepal.Length 0.223 0.716
## Sepal.Width -0.229 0.699
## Petal.Length 0.874
## Petal.Width 0.366
##
## Comp.1 Comp.2
## SS loadings 1.00 1.00
## Proportion Var 0.25 0.25
## Cumulative Var 0.25 0.50
```
```
# Not a drastic amount of difference, but clearly an attempt has been made to encourage
# sparsity in the vectors of loadings.
# NOTE: THE ROTATED FACTORS EXPLAIN THE SAME AMOUNT OF VARIANCE AS THE FIRST TWO PCS
# AFTER PROJECTING THE DATA INTO TWO DIMENSIONS (THE BIPLOT) ALL WE DID WAS ROTATE THOSE
# ORTHOGONAL AXES. THIS CHANGES THE PROPORTION EXPLAINED BY *EACH* AXIS, BUT NOT THE TOTAL
# AMOUNT EXPLAINED BY THE TWO TOGETHER.
# The output from varimax can't tell you about proportion of variance in the original data
# because you didn't even tell it what the original data was!
```
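As a hedged follow\-up to the note in those comments, we can recover the proportion of variance along each *rotated* axis ourselves by rotating the principal component scores with the same rotation matrix and comparing their variances to the total variance in the data:
```
rot.scores = irispca$scores[, 1:2] %*% rotatedpca$rotmat  # coordinates on the rotated axes
total.var = sum(apply(irispca$scores, 2, var))            # total variance (all 4 components)
apply(rot.scores, 2, var) / total.var        # per-axis proportions change with the rotation...
sum(apply(rot.scores, 2, var)) / total.var   # ...but the total still matches PC1+PC2 (~0.978)
```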
17\.7 Case Study: Personality Tests
-----------------------------------
In this example, we’ll use a publicly available dataset that describes personality traits of nearly 20,000 respondents.
Read in the Big5 Personality test dataset, which contains Likert scale responses (five point scale where 1\=Disagree, 3\=Neutral, 5\=Agree; 0\=missing) on 50 different questions in columns 8 through 57\. The questions, labeled E1\-E10 (E\=extroversion), N1\-N10 (N\=neuroticism), A1\-A10 (A\=agreeableness), C1\-C10 (C\=conscientiousness), and O1\-O10 (O\=openness), all attempt to measure 5 key facets of human personality. The first 7 columns contain demographic information coded as follows:
1. **Race** Chosen from a drop down menu.
    * 1\=Mixed Race
    * 2\=Arctic (Siberian, Eskimo)
    * 3\=Caucasian (European)
    * 4\=Caucasian (Indian)
    * 5\=Caucasian (Middle East)
    * 6\=Caucasian (North African, Other)
    * 7\=Indigenous Australian
    * 8\=Native American
    * 9\=North East Asian (Mongol, Tibetan, Korean Japanese, etc)
    * 10\=Pacific (Polynesian, Micronesian, etc)
    * 11\=South East Asian (Chinese, Thai, Malay, Filipino, etc)
    * 12\=West African, Bushmen, Ethiopian
    * 13\=Other (0\=missed)
2. **Age** Entered as text (individuals reporting age \< 13 were not recorded)
3. **Engnat** Response to “is English your native language?”
    * 1\=yes
    * 2\=no
    * 0\=missing
4. **Gender** Chosen from a drop down menu
    * 1\=Male
    * 2\=Female
    * 3\=Other
    * 0\=missing
5. **Hand** “What hand do you use to write with?”
    * 1\=Right
    * 2\=Left
    * 3\=Both
    * 0\=missing
```
options(digits=2)
big5 = read.csv('http://birch.iaa.ncsu.edu/~slrace/LinearAlgebra2021/Code/big5.csv')
```
To perform the same analysis we did in SAS, we want to use Correlation PCA and rotate the axes with a varimax transformation. We will start by performing the PCA. We need to set the option `scale=T` to perform PCA on the correlation matrix rather than the default covariance matrix. We will only compute the first 5 principal components because we have 5 personality traits we are trying to measure. We could also compute more than 5 and take the number of components with eigenvalues \>1 to match the default output in SAS (without n\=5 option).
### 17\.7\.1 Raw PCA Factors
```
options(digits=5)
pca.out = prcomp(big5[,8:57], rank = 5, scale = T)
```
Remember the only difference between the default PROC PRINCOMP output and the default PROC FACTOR output in SAS was the fact that the eigenvectors in PROC PRINCOMP were normalized to be unit vectors and the factor vectors in PROC FACTOR were those same eigenvectors scaled by the square roots of the eigenvalues. So we want to multiply each eigenvector column output in `pca.out$rotation` (recall this is the loading matrix or matrix of eigenvectors) by the square root of the corresponding eigenvalue given in `pca.out$sdev`. You’ll recall that multiplying a matrix by a diagonal matrix on the right has the effect of scaling the columns of the matrix. So we’ll just make a diagonal matrix, \\(\\textbf{S}\\) with diagonal elements from the `pca.out$sdev` vector and scale the columns of the `pca.out$rotation` matrix. Similarly, the coordinates of the data along each component then need to be *divided* by the standard deviation to cancel out this effect of lengthening the axis. So again we will multiply by a diagonal matrix to perform this scaling, but this time, we use the diagonal matrix \\(\\textbf{S}^{\-1}\=\\) `diag(1/(pca.out$sdev))`.
Matrix multiplication in R is performed with the `%*%` operator.
```
fact.loadings = pca.out$rotation[,1:5] %*% diag(pca.out$sdev[1:5])
fact.scores = pca.out$x[,1:5] %*%diag(1/pca.out$sdev[1:5])
# PRINT OUT THE FIRST 5 ROWS OF EACH MATRIX FOR CONFIRMATION.
fact.loadings[1:5,1:5]
```
```
## [,1] [,2] [,3] [,4] [,5]
## E1 -0.52057 0.27735 -0.29183 0.13456 -0.25072
## E2 0.51025 -0.35942 0.26959 -0.14223 0.21649
## E3 -0.70998 0.15791 -0.11623 0.21768 -0.11303
## E4 0.58361 -0.20341 0.31433 -0.17833 0.22788
## E5 -0.65751 0.31924 -0.16404 0.12496 -0.21810
```
```
fact.scores[1:5,1:5]
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] -2.53286 -1.16617 0.276244 0.043229 -0.069518
## [2,] 0.70216 -1.22761 1.095383 1.615919 -0.562371
## [3,] -0.12575 1.33180 1.525208 -1.163062 -2.949501
## [4,] 1.29926 1.17736 0.044168 -0.784411 0.148903
## [5,] -0.37359 0.47716 0.292680 1.233652 0.406582
```
This should match the output from SAS and it does. Remember these columns are unique up to a sign, so you’ll see factor 4 does not have the same sign in both software outputs. This is not cause for concern.
Figure 17\.2: Default (Unrotated) Factor Loadings Output by SAS
Figure 17\.3: Default (Unrotated) Factor Scores Output by SAS
### 17\.7\.2 Rotated Principal Components
The next task we may want to undertake is a rotation of the factor axes according to the varimax procedure. The most simple way to go about this is to use the `varimax()` function to find the optimal rotation of the eigenvectors in the matrix `pca.out$rotation`. The `varimax()` function outputs both the new set of axes in the matrix called `loadings` and the rotation matrix (`rotmat`) which performs the rotation from the original principal component axes to the new axes. (i.e. if \\(\\textbf{V}\\) contains the old axes as columns and \\(\\hat{\\textbf{V}}\\) contains the new axes and \\(\\textbf{R}\\) is the rotation matrix then \\(\\hat{\\textbf{V}} \= \\textbf{V}\\textbf{R}\\).) That rotation matrix can be used to perform the same rotation on the scores of the observations. If the matrix \\(\\textbf{U}\\) contains the scores for each observation, then the rotated scores \\(\\hat{\\textbf{U}}\\) are found by \\(\\hat{\\textbf{U}} \= \\textbf{U}\\textbf{R}\\)
```
varimax.out = varimax(fact.loadings)
rotated.fact.loadings = fact.loadings %*% varimax.out$rotmat
rotated.fact.scores = fact.scores %*% varimax.out$rotmat
# PRINT OUT THE FIRST 5 ROWS OF EACH MATRIX FOR CONFIRMATION.
rotated.fact.loadings[1:5,]
```
```
## [,1] [,2] [,3] [,4] [,5]
## E1 -0.71232 -0.0489043 0.010596 -0.03206926 0.055858
## E2 0.71592 -0.0031185 0.028946 0.03504236 -0.121241
## E3 -0.66912 -0.2604049 0.131609 0.01704690 0.263679
## E4 0.73332 0.1528552 -0.023367 0.00094685 -0.053219
## E5 -0.74534 -0.0757539 0.100875 -0.07140722 0.218602
```
```
rotated.fact.scores[1:5,]
```
```
## [,1] [,2] [,3] [,4] [,5]
## [1,] -1.09083 -2.04516 1.40699 -0.38254 0.5998386
## [2,] 0.85718 -0.19268 1.07708 2.03665 -0.2178616
## [3,] -0.92344 2.58761 2.43566 -0.80840 -0.1833138
## [4,] 0.61935 1.53087 -0.79225 -0.59901 -0.0064665
## [5,] -0.39495 -0.10893 -0.24892 0.99744 0.9567712
```
And again we can see that these line up with our SAS Rotated output, **however** the order does not have to be the same! SAS conveniently reorders the columns according to the variance of the data along that new direction. Since we have not done that in R, the order of the columns is not the same! Factors 1 and 2 are the same in both outputs, but SAS Factor 3 \= R Factor 4 and SAS Factor 5 \= (\-1\)\* R Factor 4\. The coordinates are switched too so nothing changes in our interpretation. Remember, when you rotate factors, you no longer keep the notion that the “first vector” explains the most variance unless you reorder them so that is true (like SAS does).
Figure 17\.4: Rotated Factor Loadings Output by SAS
```
knitr::include_graphics('RotatedScores.png')
```
Figure 17\.5: Rotated Factor Scores Output by SAS
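If we wanted to mimic SAS’s reordering, a hedged sketch is to sort the rotated columns by their sum of squared loadings, which measures the variance each rotated factor accounts for (the `.sorted` object names are introduced here just for illustration):
```
ss.load = colSums(rotated.fact.loadings^2)   # variance accounted for by each rotated factor
ord = order(ss.load, decreasing = TRUE)
rotated.fact.loadings.sorted = rotated.fact.loadings[, ord]
rotated.fact.scores.sorted = rotated.fact.scores[, ord]
round(ss.load[ord], 2)                       # now in decreasing order, like SAS
```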
### 17\.7\.3 Visualizing Rotation via BiPlots
Let’s start with a peek at BiPlots of the first two principal component loadings, prior to rotation. Notice that here I’m not going to bother with any scaling of the factor loadings as I’m not interested in forcing my output to look like SAS’s output. I’m also downsampling the observations because 20,000 is far too many to plot.
```
biplot(pca.out$x[sample(1:19719,1000),1:2],
pca.out$rotation[,1:2],
cex=c(0.2,1))
```
Figure 17\.6: BiPlot of Projection onto PC1 and PC2
```
biplot(pca.out$x[sample(1:19719,1000),3:4],
pca.out$rotation[,3:4],
cex=c(0.2,1))
```
Figure 17\.7: BiPlot of Projection onto PC3 and PC4
Let’s see what happens to these biplots after rotation:
```
vmax = varimax(pca.out$rotation)
newscores = pca.out$x%*%vmax$rotmat
biplot(newscores[sample(1:19719,1000),1:2],
vmax$loadings[,1:2],
cex=c(0.2,1),
xlab = 'Rotated Axis 1',
ylab = 'Rotated Axis 2')
```
Figure 17\.8: BiPlot of Projection onto Rotated Axes 1,2\. Extroversion questions align with axis 1, Neuroticism with Axis 2
```
biplot(newscores[sample(1:19719,1000),3:4],
vmax$loadings[,3:4],
cex=c(0.2,1),
xlab = 'Rotated Axis 3',
ylab = 'Rotated Axis 4')
```
Figure 17\.9: BiPlot of Projection onto Rotated Axes 3,4\. Agreeableness questions align with axis 3, Openness with Axis 4\.
After the rotation, we can see the BiPlots tell a more distinct story. The extroversion questions line up along rotated axis 1, neuroticism along rotated axis 2, and agreeableness and openness are reflected in rotated axes 3 and 4 respectively. The fifth rotated component can be confirmed to represent the last remaining category, which is conscientiousness.
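One hedged way to confirm that last claim is to look at which questions load most heavily on rotated axis 5 (they should be the conscientiousness items, C1\-C10), or to draw the corresponding biplot as before:
```
head(sort(abs(vmax$loadings[, 5]), decreasing = TRUE), 10)  # largest loadings on rotated axis 5
biplot(newscores[sample(1:19719,1000), c(1,5)],
       vmax$loadings[, c(1,5)],
       cex=c(0.2,1),
       xlab = 'Rotated Axis 1',
       ylab = 'Rotated Axis 5')
```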
Chapter 18 Dimension Reduction for Visualization
================================================
18\.1 Multidimensional Scaling
------------------------------
Multidimensional scaling is a technique which aims to represent higher\-dimensional data in a lower\-dimensional space while keeping the pairwise distances between points as close to their original distances as possible. It takes as input a distance matrix, \\(\\D\\) where \\(\\D\_{ij}\\) is some measure of distance between observation \\(i\\) and observation \\(j\\) (most often Euclidean distance). The original observations may involve many variables and thus exist in a high\-dimensional space. The output of MDS is a set of coordinates, usually in 2\-dimensions for the purposes of visualization, such that the Euclidean distance between observation \\(i\\) and observation \\(j\\) in the new lower\-dimensional representation is an approximation to \\(\\D\_{ij}\\).
One of the outputs in R will be a measure that is akin to the “percentage of variation explained” by PCs. The difference is that the matrix we are representing is *not* a covariance matrix, so this ratio is *not* a percentage of variation. It can, however, be used to judge how much information is retained by the lower dimensional representation. This is output in the `GOF` vector (presumably standing for *Goodness of Fit*). The first entry in `GOF` gives the ratio of the sum of the \\(k\\) largest eigenvalues to the sum of the absolute values of all the eigenvalues, and the second entry in `GOF` gives the ratio of the sum of the \\(k\\) largest eigenvalues to the sum of only the positive eigenvalues.
\\\[GOF\[1] \= \\frac{\\sum\_{i\=1}^k \\lambda\_i}{\\sum\_{i\=1}^n \|\\lambda\_i\|}\\]
and
\\\[GOF\[2] \= \\frac{\\sum\_{i\=1}^k \\lambda\_i}{\\sum\_{i\=1}^n \\max(\\lambda\_i,0\)}\\]
### 18\.1\.1 MDS of Iris Data
Let’s take a dataset we’ve already worked with, like the iris dataset, and see how this is done. Recall that the iris data contains measurements of 150 flowers (50 each from 3 different species) on 4 variables: Sepal.Length, Sepal.Width, Petal.Length, and Petal.Width. To examine a 2\-dimensional representation of this data via Multidimensional Scaling, we simply compute a distance matrix and run the MDS procedure:
```
D = dist(iris[,1:4])
fit = cmdscale(D, eig=TRUE, k=2) # k is the number of dimensions desired
fit$eig[1:12] # view first dozen eigenvalues
```
```
## [1] 6.3001e+02 3.6158e+01 1.1653e+01 3.5514e+00 3.4866e-13 3.1863e-13
## [7] 2.0112e-13 1.3770e-13 7.7470e-14 3.2881e-14 3.0740e-14 2.1786e-14
```
```
fit$GOF # view the Goodness of Fit measures
```
```
## [1] 0.97769 0.97769
```
```
# plot the solution, colored by iris species:
x = fit$points[,1]
y = fit$points[,2]
# The pch= option controls the symbol output. 16=filled circles.
plot(x,y,col=c("red","green3","blue")[iris$Species], pch=16,
xlab='Coordinate 1', ylab='Coordinate 2')
```
Figure 18\.1: Multidimensional Scaling of the Iris Data
We can tell from the eigenvalues alone that two dimensions should be relatively sufficient to summarize this data. After two large eigenvalues, the remainder drop off and become small, signifying a lack of further information. Indeed, the Goodness of Fit measurements back up this intuition: values close to 1 indicate a good fit with minimal error.
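As a quick, hedged check of the GOF formulas above, we can reproduce both Goodness of Fit values by hand from the eigenvalues with \\(k\=2\\):
```
k = 2
sum(fit$eig[1:k]) / sum(abs(fit$eig))       # GOF[1]
sum(fit$eig[1:k]) / sum(pmax(fit$eig, 0))   # GOF[2]
```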
### 18\.1\.2 MDS of Leukemia dataset
Let’s take a look at another example, this time using the Leukemia dataset which has 5000 variables. It is unreasonable to expect that we can get as good of a fit of this data using only two dimensions! There will obviously be much more error. However, we can still get a visualization that should at least show us which observations are close to each other and which are far away.
```
leuk=read.csv('http://birch.iaa.ncsu.edu/~slrace/Code/leukemia.csv')
```
As you may recall, this data has some variables with 0 variance; those entire columns are constant. To determine which ones, we first remove the last column which is a character vector that identifies the type of leukemia.
```
type = leuk[ , 5001]
leuk = leuk[,1:5000]
# If desired, could supply names of columns that have 0 variance with
# names(leuk[, sapply(leuk, function(v) var(v, na.rm=TRUE)==0)])
# The na.rm=T would allow us to keep any missing information and still compute
# the variance using the non-missing values. In this instance, it is not necessary
# because we have no missing values.
# We can remove these columns from the data with:
leuk=leuk[,apply(leuk, 2, var, na.rm=TRUE) != 0]
# compute distances matrix
t=dist(leuk)
fit=cmdscale(t,eig=TRUE, k=2)
fit$GOF
```
```
## [1] 0.35822 0.35822
```
```
x = fit$points[,1]
y = fit$points[,2]
#The cex= controls the size of the circles in the plot function.
plot(x,y,col=c("red","green","blue")[factor(type)], cex=3,
xlab='Coordinate 1', ylab='Coordinate 2',
main = 'Multidimensional Scaling of Raw Leukemia Data')
text(x,y,labels=row.names(leuk))
```
Figure 18\.2: Multidimensional Scaling of the Leukemia Data
What if we standardize our data before running the MDS procedure? Will that affect our results? Let’s see how it looks on the standardized version of the leukemia data.
```
# We can experiment with standardization to see how it
# affects our results:
leuk2=scale(leuk,center=TRUE, scale=TRUE)
t2=dist(leuk2)
fit2=cmdscale(t2,eig=TRUE,k=2)
fit2$GOF
```
```
## [1] 0.21287 0.21287
```
```
x2 = fit2$points[,1]
y2 = fit2$points[,2]
#The cex= controls the size of the circles in the plot function.
plot(x2,y2,col=c("red","green","blue")[factor(type)], cex=3,
xlab='Coordinate 1', ylab='Coordinate 2',
main = 'Multidimensional Scaling of Standardized Leukemia Data')
text(x2,y2,labels=row.names(leuk))
```
### A note on standardization
Clearly, things have changed substantially. We shouldn’t give too much credence to the decreased Goodness of Fit statistics. I don’t necessarily believe that we are explaining less information just because we scaled our data; the fact that this number has changed should likely be attributed to the fact that we have significantly decreased all of the eigenvalues of the matrix, and not in any predictable or meaningful way. It’s more important to focus on what we are trying to represent, and that is differences between samples. Perhaps if there are some genes for which values vary wildly between the different leukemia types, and other genes which don’t show much variation, then we should keep this information in the data. By standardizing the data, we’re making the variation of every gene equal to 1, which stands to wash out some of the bigger, more discriminating factors in the distance calculations. This consideration is something that will need to be made for each dataset on a case\-by\-case basis. If our dataset had variables with widely different scales (like income and number of cars) then standardization would be a much more reasonable approach!
There are several things to keep in mind when studying an MDS map.
1. The axes are, by themselves, meaningless.
2. The orientation of the picture is completely arbitrary (see the quick check after this list).
3. All that matters is the relative proximity of the points in the map. Are they close? Are they far apart?
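Here is the quick check promised above, a hedged sketch using the standardized leukemia solution: rotating (or reflecting) the MDS coordinates leaves every pairwise distance unchanged, which is why the orientation of the map carries no meaning. The rotation angle below is arbitrary.
```
theta = pi/3
R = matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), 2, 2)  # 2x2 rotation matrix
rotated.points = fit2$points %*% R
max(abs(dist(rotated.points) - dist(fit2$points)))  # essentially zero
```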
Chapter 19 Social Network Analysis
==================================
19\.1 Working with Network Data
-------------------------------
We’ll use the popular `igraph` package to explore the student slack network in R. The data has been anonymized for use in this text. First, we load the two data frames that contain the information for our network:
\- `SlackNetwork` contains the interactions between pairs of students. An interaction between students was defined as either an emoji\-reaction or threaded reply to a post. The *source* of the interaction is the individual reacting or replying and the *target* of the interaction is the user who originated the post. This data frame also contains the channel in which the interaction takes place, and 9 binary flags indicating the presence or absence of certain keywords or phrases of interest.
\- `users` contains user\-level attributes like the cohort to which a student belongs (‘blue’ or ‘orange’).
```
library(igraph)
load('LAdata/slackanon2021.RData')
```
```
head(SlackNetwork)
```
```
## source target channel notes study howdoyou python R SAS beer
## 1 U0130T4056Y U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 2 U012UMSH7FC U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 3 U012N097BUN U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 4 U012E0B3YBZ U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 5 U0130T1GKMJ U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 6 U0130T486SY U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## food snack edgeID
## 1 FALSE FALSE 1
## 2 FALSE FALSE 2
## 3 FALSE FALSE 3
## 4 FALSE FALSE 4
## 5 FALSE FALSE 5
## 6 FALSE FALSE 6
```
```
head(users)
```
```
## userID Cohort
## 1 U012E07QP1D o
## 2 U012E08EWTZ b
## 3 U012E08R0CX o
## 4 U012E08TYMV b
## 5 U012E096VF1 o
## 6 U012E09JRAB o
```
Using this information, we can create an igraph network object using the `graph_from_data_frame()` function. We can then apply some functions from the igraph package to discover the underlying data as we’ve already seen it. Because this network has almost 42,000 edges overall, we’ll subset the data and only look at interactions from the general channel.
19\.2 Network Visualization \- `igraph` package
-----------------------------------------------
```
SlackNetworkSubset = SlackNetwork[SlackNetwork$channel=='general',]
slack = graph_from_data_frame(SlackNetworkSubset, directed = TRUE, vertices = users)
plot(slack)
```
The default plots certainly leave room for improvement. We notice that one user is not connected to the rest of the network in the general channel, signifying that this user has not reacted or replied in a threaded fashion to any posts in this channel, nor have they created a post that received any interaction. We can delete this vertex from the network by taking advantage of the `delete.vertices()` function specifying that we want to remove all vertices with degree equal to zero. You’ll recall that the degree of a vertex is the number of edges that connect to it.
```
slack=delete.vertices(slack,degree(slack)==0)
```
There are various ways that we can improve the network visualization, but we will soon see that *layout* is, by far, the most important. First, let’s explore how we can use the plot options to change the line weight, size, and color of the nodes and edges to improve the visualization in the following chunk.
```
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='gray',edge.color='blue')
```
### 19\.2\.1 Layout algorithms for `igraph` package
The igraph package has many different layout algorithms available; type `?igraph::layout` for a list of them. By clicking on each layout in the help menu, you’ll be able to distinguish which of the layouts are force\-directed and which are not. Force\-directed layouts generally provide the highest quality network visualizations. The Davidson\-Harel (`layout_with_dh`), Fruchterman\-Reingold (`layout_with_fr`), DrL (`layout_with_drl`) and multidimensional scaling algorithms (`layout_with_mds`) are probably the most well\-known algorithms available in this package.
We recommend that you compute the layout outside of the plot function so that you may use it again without re\-computing it. After all, a layout is just a two\-dimensional array of coordinates that specifies where each node should be placed. If you compute the layout inside the plot function, then every time you make a small adjustment like color or edge arrow size, your computer will have to re\-compute the layout algorithm.
The following code chunk computes 4 different layouts and then plots the resulting networks on a 2x2 grid for comparison. We encourage you to substitute four *different* layouts (listed in the help document at the bottom) in place of the ones chosen here as part of your exploration.
```
#?igraph::layout
l = layout_with_lgl(slack)
l2 = layout_with_fr(slack)
l3 = layout_with_drl(slack)
l4 = layout_with_mds(slack)
par(mfrow=c(2,2),mar=c(1,1,1,1))
# Above tells the graphic window to use the
# following plots to fill out a 2x2 grid with margins of 1 unit
# on each side. Must reset these options with dev.off() when done!
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l,main="Large Graph Layout")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l2,main="Fruchterman-Reingold")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l3,main="DrL")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l4,main = "MDS")
```
To reset your plot window, you should run `dev.off()` or else your future plots will continue to display in a 2x2 grid.
```
dev.off()
```
### 19\.2\.2 Adding attribute information to your visualization
We commonly want to represent information about our nodes using color or size. This is easily done by passing a vector of colors into the plot function that maintains the order in the *users* data frame. We can then create a legend and locate it in our plot window as desired.
```
plot(slack, edge.arrow.size = .2,
vertex.label=V(slack)$name,
vertex.size=10,
vertex.label.cex = 0.3,
vertex.color=c("blue","orange")[as.factor(V(slack)$Cohort)],
layout=l3,
main = "Slack Network Colored by Cohort")
legend(x=-1.5,y=0,unique(V(slack)$Cohort),pch=21,
pt.bg=c("blue","orange"),pt.cex=2,bty="n",ncol=1)
```
A (nearly) complete list of plot option parameters is given below:
* **vertex.color**: Node color
* **vertex.frame.color**: Node border color
* **vertex.shape**: Vector containing shape of vertices, like “circle,” “square,” “csquare,” “rectangle” etc
* **vertex.size**: Size of the node (default is 15\)
* **vertex.size2**: The second size of the node (e.g. for a rectangle)
* **vertex.label**: Character vector used to label the nodes
* **vertex.label.color**: Character vector specifying the color of the node labels
* **vertex.label.family**: Font family of the label (e.g.“Times,” “Helvetica”)
* **vertex.label.font**: Font: 1 plain, 2 bold, 3 italic, 4 bold italic, 5 symbol
* **vertex.label.cex**: Font size (multiplication factor, device\-dependent)
* **vertex.label.dist**: Distance between the label and the vertex
* **vertex.label.degree**: The position of the label in relation to the vertex (use pi)
* **edge.color**: Edge color
* **edge.width**: Edge width, defaults to 1
* **edge.arrow.size**: Arrow size, defaults to 1
* **edge.arrow.width**: Arrow width, defaults to 1
* **edge.lty**: Line type, 0 \=“blank,” 1 \=“solid,” 2 \=“dashed,” 3 \=“dotted,” etc
* **edge.curved**: Edge curvature, range 0\-1 (FALSE sets it to 0, TRUE to 0\.5\)
and if you’d like to try a dark\-mode style visualization, consider changing the background color of your visual with the global graphical parameter `par(bg="black")`.
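As a hedged example (one possible styling, nothing here is required), several of these options can be combined with the dark\-mode tip:
```
par(bg="black")
plot(slack, layout=l3,
     vertex.label=NA, vertex.size=8,
     vertex.color='gray80', vertex.frame.color=NA,
     edge.color='steelblue', edge.arrow.size=.2, edge.curved=.2)
par(bg="white")   # reset the background when finished
```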
Any one of these option parameters can be set according to a variable in your dataset, or a metric about your graph. For example, let’s define **degree** as the number of edges that are adjacent to a given vertex. We can *size* the vertices according to their degree by including that information in the plot function as follows, using the `degree()` function. We just have to keep in mind that the `vertex.size` plot attribute is expecting the same range of sizes that you would provide for any points on a plot, and since the degree of a vertex can be very high in this case, we should put it on a scale that seems more reasonable. In this example, we divide the degree by the maximum degree to create a number between 0 and 1 and then multiply it by 10 to create `vertex.size` values between zero and 10\.
```
plot(slack, edge.arrow.size = .2,
vertex.label=V(slack)$name,
vertex.size=10*degree(slack, v=V(slack), mode='all')/max(degree(slack, v=V(slack), mode='all')),
vertex.label.cex = 0.3,
vertex.color=c("blue","orange")[as.factor(V(slack)$Cohort)],
layout=l3,
main = "Slack Network Colored by Cohort")
legend(x=-1.5,y=0,c("Orange","Blue"),pch=21,
pt.bg=c("Orange","Blue"),pt.cex=2,bty="n",ncol=1)
```
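Since the network is directed, we might also compare in\-degree (interactions received on one’s posts) with out\-degree (interactions given); this is a hedged aside rather than something required for the plot above:
```
summary(degree(slack, mode='in'))    # reactions/replies received
summary(degree(slack, mode='out'))   # reactions/replies given
```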
19\.3 Package `networkD3`
-------------------------
The network D3 package creates the same type of visualizations that you would see in the [JavaScript library D3](https://d3js.org). These visualizations are highly interactive and quite beautiful.
```
library(networkD3)
```
### 19\.3\.1 Preparing the data for `networkD3`
The one thing that you’ll have to keep in mind when creating this visualization is that this package insists that the labels (indices) of your nodes start from zero. To use this package, you need a data frame containing the edge list and a data frame containing the node data. While we already have these data frames prepared, the following chunk of code shows you how to extract them from an igraph object and easily transform your ID or label column into a counter that starts from 0\. You can see the first few rows of the resulting data frames below.
```
nodes=data.frame(vertex_attr(slack))
nodes$ID=0:(vcount(slack)-1)
#data frame with edge list
edges=data.frame(get.edgelist(slack))
colnames(edges)=c("source","target")
edges=merge(edges, nodes[,c("name","ID")],by.x="source",by.y="name")
edges=merge(edges, nodes[,c("name","ID")],by.x="target",by.y="name")
edges=edges[,3:4]
colnames(edges)=c("source","target")
head(edges)
```
```
## source target
## 1 80 0
## 2 77 0
## 3 99 0
## 4 22 0
## 5 101 1
## 6 71 1
```
```
head(nodes)
```
```
## name Cohort ID
## 1 U012E07QP1D o 0
## 2 U012E08EWTZ b 1
## 3 U012E08R0CX o 2
## 4 U012E08TYMV b 3
## 5 U012E096VF1 o 4
## 6 U012E09JRAB o 5
```
Once we have our data in the right format it’s easy to create the force\-directed network a la D3 with the `forceNetwork()` function, and to save it as an .html file with the `saveNetwork()` function.
### 19\.3\.2 Creating an Interactive Visualization with `networkD3`
The following visualization is interactive! Try it by hovering on or dragging a node.
```
colors = JS('d3.scaleOrdinal().domain(["b", "o"]).range(["#0000ff", "#ffa500"])')
forceNetwork(Links=edges, Nodes=nodes, Source = "source",
Target = "target", NodeID="name", Group="Cohort", colourScale=colors,
charge=-100,fontSize=12, opacity = 0.8, zoom=F, legend=T)
```
### 19\.3\.3 Saving your Interactive Visualization to .html
Exploration of the resulting visualization is likely to be smoother in .html, so let’s export this visualization to a file with `saveNetwork()`.
```
j=forceNetwork(Links=edges, Nodes=nodes, Source = "source",
Target = "target", NodeID="name", Group="Cohort",
fontSize=12, opacity = 0.8, zoom=T, legend=T)
saveNetwork(j, file = 'Slack2021.html')
```
You can find the resulting file in your working directory (or you can specify a path rather than just a file name) and open it with any web browser.
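For convenience, the saved file can also be opened directly from R with the base `browseURL()` function:
```
browseURL('Slack2021.html')
```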
19\.1 Working with Network Data
-------------------------------
We’ll use the popular `igraph` package to explore the student slack network in R. The data has been anonymized for use in this text. First, we load the two data frames that contain the information for our network:
\- `SlackNetwork` contains the interactions between pairs of students. An interaction between students was defined as either an emoji\-reaction or threaded reply to a post. The *source* of the interaction is the individual reacting or replying and the *target* of the interaction is the user who originated the post. This data frame also contains the channel in which the interaction takes place, and 9 binary flags indicating the presence or absence of certain keywords or phrases of interest.
\- `users` contains user\-level attributes like the cohort to which a student belongs (‘blue’ or ‘orange’).
```
library(igraph)
load('LAdata/slackanon2021.RData')
```
```
head(SlackNetwork)
```
```
## source target channel notes study howdoyou python R SAS beer
## 1 U0130T4056Y U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 2 U012UMSH7FC U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 3 U012N097BUN U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 4 U012E0B3YBZ U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 5 U0130T1GKMJ U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## 6 U0130T486SY U0130T30B36 random FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## food snack edgeID
## 1 FALSE FALSE 1
## 2 FALSE FALSE 2
## 3 FALSE FALSE 3
## 4 FALSE FALSE 4
## 5 FALSE FALSE 5
## 6 FALSE FALSE 6
```
```
head(users)
```
```
## userID Cohort
## 1 U012E07QP1D o
## 2 U012E08EWTZ b
## 3 U012E08R0CX o
## 4 U012E08TYMV b
## 5 U012E096VF1 o
## 6 U012E09JRAB o
```
Using this information, we can create an igraph network object with the `graph_from_data_frame()` function. We can then apply functions from the igraph package to explore the underlying data, as we’ve already seen. Because this network has almost 42,000 edges overall, we’ll subset the data and look only at interactions from the general channel.
19\.2 Network Visualization \- `igraph` package
-----------------------------------------------
```
SlackNetworkSubset = SlackNetwork[SlackNetwork$channel=='general',]
slack = graph_from_data_frame(SlackNetworkSubset, directed = TRUE, vertices = users)
plot(slack)
```
The default plots certainly leave room for improvement. We notice that one user is not connected to the rest of the network in the general channel, signifying that this user has not reacted or replied in a threaded fashion to any posts in this channel, nor have they created a post that received any interaction. We can delete this vertex from the network by taking advantage of the `delete.vertices()` function specifying that we want to remove all vertices with degree equal to zero. You’ll recall that the degree of a vertex is the number of edges that connect to it.
```
slack=delete.vertices(slack,degree(slack)==0)
```
There are various ways that we can improve the network visualization, but we will soon see that *layout* is, by far, the most important. First, let’s explore how we can use the plot options to change the line weight, size, and color of the nodes and edges to improve the visualization in the following chunk.
```
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='gray',edge.color='blue')
```
### 19\.2\.1 Layout algorithms for `igraph` package
The igraph package has many different layout algorithms available; type `?igraph::layout` for a list of them. By clicking on each layout in the help menu, you’ll be able to distinguish which of the layouts are force\-directed and which are not. Force\-directed layouts generally provide the highest quality network visualizations. The Davidson\-Harel (`layout_with_dh`), Fruchterman\-Reingold (`layout_with_fr`), DrL (`layout_with_drl`) and multidimensional scaling algorithms (`layout_with_mds`) are probably the most well\-known algorithms available in this package.
We recommend that you compute the layout outside of the plot function so that you may use it again without re\-computing it. After all, a layout is just a two dimensional array of coordinates that specifies where each node should be placed. If you compute the layout inside the plot function, then every time you make a small adjustment like color or edge arrow size, your computer will have to re\-compute the layout algorithm.
The following code chunk computes 4 different layouts and then plots the resulting networks on a 2x2 grid for comparison. We encourage you to substitute four *different* layouts (listed in the help document at the bottom) in place of the ones chosen here as part of your exploration.
```
#?igraph::layout
l = layout_with_lgl(slack)
l2 = layout_with_fr(slack)
l3 = layout_with_drl(slack)
l4 = layout_with_mds(slack)
par(mfrow=c(2,2),mar=c(1,1,1,1))
# Above tells the graphic window to use the
# following plots to fill out a 2x2 grid with margins of 1 unit
# on each side. Must reset these options with dev.off() when done!
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l,main="Large Graph Layout")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l2,main="Fruchterman-Reingold")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l3,main="DrL")
plot(slack, edge.arrow.size = .3, vertex.label=NA,vertex.size=10,
vertex.color='lightblue', layout=l4,main = "MDS")
```
To reset your plot window, you should run `dev.off()` or else your future plots will continue to display in a 2x2 grid.
```
dev.off()
```
### 19\.2\.2 Adding attribute information to your visualization
We commonly want to represent information about our nodes using color or size. This is easily done by passing a vector of colors into the plot function that maintains the order in the *users* data frame. We can then create a legend and locate it in our plot window as desired.
```
plot(slack, edge.arrow.size = .2,
vertex.label=V(slack)$name,
vertex.size=10,
vertex.label.cex = 0.3,
vertex.color=c("blue","orange")[as.factor(V(slack)$Cohort)],
layout=l3,
main = "Slack Network Colored by Cohort")
legend(x=-1.5,y=0,unique(V(slack)$Cohort),pch=21,
pt.bg=c("blue","orange"),pt.cex=2,bty="n",ncol=1)
```
A (nearly) complete list of plot option parameters is given below:
* **vertex.color**: Node color
* **vertex.frame.color**: Node border color
* **vertex.shape**: Vector containing shape of vertices, like “circle,” “square,” “csquare,” “rectangle” etc
* **vertex.size**: Size of the node (default is 15\)
* **vertex.size2**: The second size of the node (e.g. for a rectangle)
* **vertex.label**: Character vector used to label the nodes
* **vertex.label.color**: Character vector specifying the color of the node labels
* **vertex.label.family**: Font family of the label (e.g.“Times,” “Helvetica”)
* **vertex.label.font**: Font: 1 plain, 2 bold, 3 italic, 4 bold italic, 5 symbol
* **vertex.label.cex**: Font size (multiplication factor, device\-dependent)
* **vertex.label.dist**: Distance between the label and the vertex
* **vertex.label.degree**: The position of the label in relation to the vertex (use pi)
* **edge.color**: Edge color
* **edge.width**: Edge width, defaults to 1
* **edge.arrow.size**: Arrow size, defaults to 1
* **edge.arrow.width**: Arrow width, defaults to 1
* **edge.lty**: Line type, 0 \=“blank,” 1 \=“solid,” 2 \=“dashed,” 3 \=“dotted,” etc
* **edge.curved**: Edge curvature, range 0\-1 (FALSE sets it to 0, TRUE to 0\.5\)
If you’d like to try a dark\-mode style visualization, consider changing the background color of your plot with the global graphical parameter `par(bg="black")`.
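For instance, a minimal dark\-mode sketch might look like the following. This reuses the igraph object `slack` and the DrL layout `l3` computed above; the particular gray tones are just one possible choice and are not part of the original example.
```
# A sketch only: plot the same network on a black background.
# Assumes the igraph object `slack` and layout `l3` from earlier chunks.
par(bg = "black")
plot(slack, edge.arrow.size = .3, vertex.label = NA, vertex.size = 10,
     vertex.color = "gray80", edge.color = "gray40", layout = l3)
par(bg = "white") # reset the background before making other plots
```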
Any one of these option parameters can be set according to a variable in your dataset or a metric about your graph. For example, let’s define **degree** as the number of edges that are adjacent to a given vertex. We can *size* the vertices according to their degree by including that information in the plot function as follows, using the `degree()` function. We just have to keep in mind that the `vertex.size` plot attribute expects the same range of sizes that you would provide for any points on a plot, and since the degree of a vertex can be very high in this case, we should put it on a more reasonable scale. In this example, we divide the degree by the maximum degree to create a number between 0 and 1 and then multiply it by 10 to create `vertex.size` values between zero and 10\.
```
plot(slack, edge.arrow.size = .2,
vertex.label=V(slack)$name,
vertex.size=10*degree(slack, v=V(slack), mode='all')/max(degree(slack, v=V(slack), mode='all')),
vertex.label.cex = 0.3,
vertex.color=c("blue","orange")[as.factor(V(slack)$Cohort)],
layout=l3,
main = "Slack Network Colored by Cohort")
legend(x=-1.5,y=0,c("Orange","Blue"),pch=21,
pt.bg=c("Orange","Blue"),pt.cex=2,bty="n",ncol=1)
```
19\.3 Package `networkD3`
-------------------------
The network D3 package creates the same type of visualizations that you would see in the [JavaScript library D3](https://d3js.org). These visualizations are highly interactive and quite beautiful.
```
library(networkD3)
```
### 19\.3\.1 Preparing the data for `networkD3`
One thing you’ll have to keep in mind when creating this visualization is that this package insists that the indices (label names) of your nodes start from zero. To use this package, you need a data frame containing the edge list and a data frame containing the node data. While we already have these data frames prepared, the following chunk of code shows you how to extract them from an igraph object and easily transform your ID or label column into a counter that starts from 0\. You can see the first few rows of the resulting data frames below.
```
nodes=data.frame(vertex_attr(slack))
nodes$ID=0:(vcount(slack)-1)
#data frame with edge list
edges=data.frame(get.edgelist(slack))
colnames(edges)=c("source","target")
edges=merge(edges, nodes[,c("name","ID")],by.x="source",by.y="name")
edges=merge(edges, nodes[,c("name","ID")],by.x="target",by.y="name")
edges=edges[,3:4]
colnames(edges)=c("source","target")
head(edges)
```
```
## source target
## 1 80 0
## 2 77 0
## 3 99 0
## 4 22 0
## 5 101 1
## 6 71 1
```
```
head(nodes)
```
```
## name Cohort ID
## 1 U012E07QP1D o 0
## 2 U012E08EWTZ b 1
## 3 U012E08R0CX o 2
## 4 U012E08TYMV b 3
## 5 U012E096VF1 o 4
## 6 U012E09JRAB o 5
```
Once we have our data in the right format, it’s easy to create the force\-directed network à la D3 with the `forceNetwork()` function and to save it as an .html file with the `saveNetwork()` function.
### 19\.3\.2 Creating an Interactive Visualization with `networkD3`
The following visualization is interactive! Try it by hovering on or dragging a node.
```
colors = JS('d3.scaleOrdinal().domain(["b", "o"]).range(["#0000ff", "#ffa500"])')
forceNetwork(Links=edges, Nodes=nodes, Source = "source",
Target = "target", NodeID="name", Group="Cohort", colourScale=colors,
charge=-100,fontSize=12, opacity = 0.8, zoom=F, legend=T)
```
### 19\.3\.3 Saving your Interactive Visualization to .html
Exploration of the resulting visualization is likely to be smoother in .html, so let’s export this visualization to a file with `saveNetwork()`.
```
j=forceNetwork(Links=edges, Nodes=nodes, Source = "source",
Target = "target", NodeID="name", Group="Cohort",
fontSize=12, opacity = 0.8, zoom=T, legend=T)
saveNetwork(j, file = 'Slack2021.html')
```
You can find the resulting file in your working directory (or you can specify a path rather than just a file name) and open it with any web browser.
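For example, assuming the file was saved to the working directory as above, you can open it straight from R with base R’s `browseURL()`:
```
# open the saved interactive visualization in the default web browser
browseURL("Slack2021.html")
```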
Chapter 1 Introduction
======================
1\.1 R Packages
---------------
Here are all of the [`R`](glossary.html#def:rstats) packages used in this book. Some are installed from [CRAN](https://cran.r-project.org/), some are installed from [Bioconductor](https://www.bioconductor.org/), and some are installed from GitHub using the [`devtools`](https://devtools.r-lib.org/) package (Wickham, Hester, and Chang [2019](#ref-R-devtools)). If you would like to download and compile this book on your own system, this chunk will install and load all of the necessary packages for you. If you have problems with installation of any of these packages, this is probably due to an individual computer or `R` version issue. If you experience any installation problems, the best thing to do is to search for the error message you get on the web to see if anyone else has already solved your problem. If the package you are having troubles with has a [GitHub](https://github.com/) repository, you should search its [Issues](https://github.com/issues) to see if the author of the package has encountered and/or solved your problem before. If not, [community.rstudio.com](https://community.rstudio.com/) and [Stack Overflow](https://stackoverflow.com/questions/tagged/r) tend to be the most trustworthy sources of troubleshooting information. If all else fails, you can post an issue on the [Github repo](https://github.com/sctyner/OpenForSciR/issues) for this book. However, this should be your last resort: the authors of this book are subject matter experts, not necessarily computer debugging experts for your specific system and installation.
```
options(repos = "http://cran.us.r-project.org")
# check if a package is installed, if not install it from cran
install_cran_missing <- function(pkgname){
if(class(pkgname) != "character"){
stop("pkgname must be a character")
}
if (!require(pkgname, character.only = TRUE, quietly = TRUE)){
install.packages(pkgname, dep = T)
if(!require(pkgname,character.only = TRUE)) stop("CRAN Package not found")
}
}
# check if a package is installed, if not install it from Github
install_dev_missing <- function(pkgname, ghuser){
if(class(pkgname) != "character"){
stop("pkgname must be a character")
}
if (!require(pkgname, character.only = TRUE, quietly = TRUE)){
repo <- paste(ghuser, pkgname, sep ='/')
devtools::install_github(repo)
if(!require(pkgname,character.only = TRUE)) stop("Github repo for package not found")
}
}
# need devtools to get some packages
install_cran_missing("devtools")
# packages used throughout the book
install_cran_missing("tidyverse") # or install_dev_missing("tidyverse", "tidyverse")
install_dev_missing("gt", "rstudio")
# Chapter 2: DNA Validation
# Watch out for dependency issues with RGtk2...
install_cran_missing("strvalidator")
# Chapter 3: Bullets
install_cran_missing("x3ptools") # or install_dev_missing("x3ptools", "heike")
install_cran_missing("randomForest")
install_dev_missing("bulletxtrctr", "heike")
# Chapter 4: Casings
if (!requireNamespace("BiocManager", quietly = TRUE)){
install.packages("BiocManager")
}
BiocManager::install("EBImage", version = "3.8")
install_dev_missing("cartridges", "xhtai")
# Chapter 5: Fingerprints
install_cran_missing("bmp")
install_cran_missing("kableExtra")
install_dev_missing("fingerprintr", "kdp4be")
# Chapter 6: Shoe
install_dev_missing("shoeprintr", "CSAFE-ISU")
install_dev_missing("patchwork", "thomasp85")
# Chapter 7: Glass
install_cran_missing("caret")
install_cran_missing("GGally")
install_cran_missing("stringr")
install_dev_missing("csafethemes", "csafe-isu")
# Chapter 8: Decision Making
install_dev_missing("blackboxstudyR", "aluby")
install_cran_missing("rstan")
install_cran_missing("RColorBrewer")
install_cran_missing("gridExtra")
install_cran_missing("gghighlight")
install_cran_missing("ggpubr")
```
1\.2 Terminology and Definitions
--------------------------------
In the appendix, you will find a dedicated [Glossary](glossary.html#glossary) section. The terms are listed in the order that they appear in the book. Throughout the book, the first time you encounter a glossary phrase, the phrase will be a link to the glossary entry, similar to the way Wikipedia handles their links. If you feel a term is lacking a glossary entry, please file an issue on [GitHub](https://github.com/sctyner/OpenForSciR/issues).
1\.3 Forensic Science Problems
------------------------------
The purpose of this section is to introduce the general type of [forensic science](glossary.html#def:forsci) problem that will be covered in the book. The common understanding of forensic science is that law enforcement uses it to help solve crimes, but the primary professionals in forensic science are scientists, not members of law enforcement. According to the American Academy of Forensic Sciences ([AAFS](https://www.aafs.org/home-page/students/choosing-a-career/whats-a-forensic-scientist/)), any forensic scientist is first and foremost a scientist, who communicates their knowledge and their tests results to lawyers, juries, and judges.
Where a legal matter is concerned, law enforcement is in charge of answering a very different question than the forensic scientist examining the [evidence](glossary.html#def:evidence). Specifically in the criminal context, law enforcement wants to know who committed the crime, while the forensic scientist wants to understand the nature of the evidence. In theory, the forensic scientist’s conclusion can be used by law enforcement, but law enforcement information should generally not be used by the scientist, with some exceptions per R. Cook et al. ([1998](#ref-cook_hierarchy_1998)).
Law enforcement and the courts focus on the *offense level hypotheses*, which describe the crime and its possible perpetrators (R. Cook et al. [1998](#ref-cook_hierarchy_1998)). By contrast, the forensic scientist devotes their attention most often to the *source level hypotheses*, which describe the evidence and its possible sources (R. Cook et al. [1998](#ref-cook_hierarchy_1998)). In between the offense level and the source level are the *activity level hypotheses*, which describe an action associated with the crime and the persons who did that action. For each level, two or more disjoint hypotheses are considered. For example, consider the following set of hypotheses adapted from R. Cook et al. ([1998](#ref-cook_hierarchy_1998)):
| Hypothesis Level | Competing Hypotheses |
| --- | --- |
| Offense | Mr X committed the burglary |
| "" | Another person committed the burglary |
| Activity | Mr X smashed window W at the scene of the burglary |
| "" | Mr X was not present when the window W was smashed |
| Source | The glass fragments came from window W |
| "" | The glass fragments came from another unknown broken glass object |
In this example, the forensic scientist does not know the source of the glass fragments being analyzed, while the police may know that they were found on Mr X’s clothes upon his arrest. The forensic scientist will likely have a sample from the window W and the small fragments, and will only be asked to determine if the two samples are distinguishable. Note that the scientist does not need to know any details from the case, such as where the burglary occurred, who the suspect is, or what was stolen, to analyze the two glass samples. Notice the distinction: the forensic scientist is not a member of law enforcement, so their main concern is not “catching the bad guy.” Instead, they are concerned with coming to the best conclusion using science.
The problems discussed in this book concern the source level hypotheses exclusively: we are not interested in taking the place of law enforcement or legal professionals. We are chiefly concerned with *science*, not the law. Each chapter in this book is outlined as follows:
1. Introduction: This section familiarizes the reader with the type of forensic evidence covered in the chapter. Basic terms and concepts are covered, as well as what is typically done in a forensic science setting or lab to evaluate this type of evidence. The introduction section is not comprehensive by any stretch of the imagination: enough information is provided to understand the rest of the chapter, but that is all. For detailed coverage of a particular type of forensic evidence covered here, please consult each chapter’s references section.
2. Data: This section covers the form of the forensic evidence and describes how that evidence is converted into computer\-readable format for analysis in `R`. It will also provide the reader with an example data set, including the type and structure of data objects that are required for the analyses.
3. `R` Package(s): This section introduces the `R` package(s) required for performing the analysis of the forensic evidence discussed. In some cases, the chapter author discusses (and in most cases is the author of) an `R` package written explicitly for the particular forensic analysis. In other cases, the `R` package(s) used were applied to the particular forensic science application but are broadly applicable to other data analyses.
4. Drawing Conclusions: This section describes how to make a decision about the evidence under consideration. In some cases, there are [score](glossary.html#def:score) based methods used to make a decision, and in others, a [likelihood ratio](glossary.html#def:lr) is used. Comparisons to the current practice in forensic science are also made where applicable.
5. Case study: This section follows a piece of forensic evidence from reading the requisite data into `R` to drawing a conclusion about the source of the evidence. The author guides the reader step\-by\-step through the `R` code required to perform a complete analysis.
Chapter 2 Validation of DNA Interpretation Systems
==================================================
#### Sam Tyner
### Acknowledgements
This work would not have been possible without the excellent documentation of the [`strvalidator`](https://sites.google.com/site/forensicapps/strvalidator) package (Hansson, Gill, and Egeland [2014](#ref-strval)). Thank you to the package’s author, [Oskar Hansson, Ph.D](https://www.linkedin.com/in/goto-oskarhansson/), who has authored many, many supporting documents, tutorials, etc. for his `strvalidator` package. Thank you, Oskar!
2\.1 Introduction
-----------------
The earliest documented use of [DNA profiling](glossary.html#def:dnaprof) in the legal system was an immigration dispute in the United Kingdom (Butler [2005](#ref-butler05)). A young man of Ghanaian descent with family in the UK was believed to have forged his Ghanaian passport and had an expired British passport[2](#fn2). DNA profiling techniques developed by Sir Alec Jeffreys were used to prove that he was indeed his mother’s son, and thus he did have a right to immigrate to the UK. The technique was subsequently used for many other parentage cases, and soon after, DNA profiling was used for the first time to convict someone of murder in 1986 (Butler [2009](#ref-butler09)).
When DNA profiling began, an individual’s blood sample was taken to create their DNA profile. Now, DNA can be taken by a cheek swab, and minute traces of touch DNA can tie a perpetrator to the scene of the crime. This is thanks to the [polymerase chain reaction](glossary.html#def:pcr) (PCR), a method of copying a DNA sample over and over again to amplify the genetic signal for profile extraction. Once a DNA sample is amplified with PCR, different DNA markers can be analyzed to make an identification. The standard for forensic DNA typing is to use [short tandem repeats](glossary.html#def:str) (STRs) as the DNA marker. Other markers, [single nucleotide polymorphisms](glossary.html#def:snp) (SNPs) and the mitochondrial genome ([mtDNA](glossary.html#def:mtdna)), have different uses. SNPs can be used to identify ancestry or visible traits of a human, while mtDNA is used in cases where DNA is highly degraded (Liu and Harbison [2018](#ref-dnareview)). Because STR is the standard, we dedicate the rest of this chapter to its methodology.
### 2\.1\.1 Procedure for DNA Analysis using STRs
In order to understand the STR methodology, we first need to understand what is being analysed. We present the comparison of genetic and printed information from Butler ([2009](#ref-butler09)) in Table [2\.1](dnaval.html#tab:comparetable). When forensic scientists analyze a DNA sample, they are looking for repeated “words” or DNA sequences in different “paragraphs,” or loci. The locus information is stored in the [chromosome](glossary.html#def:chromo), which is the “page” the genetic information is on. Your *chromosomes* exist in the *nucleus* of every *cell* in your *body*, just like a *page* is within a *chapter* in a *book* in a *library*. STR markers are a set of loci or genes. At each locus, the number of times a tetranucleotide sequence (e.g. AAGC) repeats is counted (Butler [2005](#ref-butler05)). This count indicates the [allele](glossary.html#def:allele), or gene variation, at that particular locus.
Table 2\.1: Recreation of Table 2\.1 from Butler ([2009](#ref-butler09))
| Printed Information | Genetic Information |
| --- | --- |
| Library | Body |
| Book | Cell |
| Chapter | Nucleus |
| Page | Chromosome |
| Paragraph | [Locus](glossary.html#def:locus) or gene |
| Word | Short DNA sequence |
| Letter | DNA [nucleotide](glossary.html#def:nucleotide) |
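As a toy illustration of the repeat\-counting idea above (not part of any real typing workflow), base R’s regular expressions can count how many times a motif such as AAGC repeats consecutively in a short, made\-up sequence:
```
# hypothetical sequence containing a run of the AAGC motif
seq_str <- "TTAAGCAAGCAAGCAAGCGG"
# extract the run of consecutive AAGC repeats (greedy first match)
run <- regmatches(seq_str, regexpr("(AAGC)+", seq_str))
nchar(run) / 4 # number of repeats, i.e. the "allele" at this toy locus
```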
In forensic DNA profiling, a particular set of loci is examined for comparison. The number of loci depends on the equipment and method used to analyze the DNA sample, but can be as high as 27 for the particular method we discuss here (Butler, Hill, and Coble [2012](#ref-loci27)). As of January 1, 2017, there are 20 core loci in CODIS, the Combined DNA Index System, which is the FBI’s national program for DNA [databases](glossary.html#def:database) and software used in criminal justice systems across the United States (FBI [2017](#ref-codis)). These sets of loci were chosen because of their high variability in the population. To find the alleles at each locus, the DNA sample is amplified using PCR, and then run through [capillary electrophoresis](glossary.html#def:ce) (CE). The result of CE is the DNA profile, with the alleles at each locus indicated by different colored peaks from a chemical dyeing process.
The amplification process introduces random change known as *slippage*, which creates [*stutter peaks*](glossary.html#def:stutter) in the observed DNA profile that are different than the true allele peaks (Butler [2009](#ref-butler09)). In addition, different labs may use different machines and materials in forensic analysis resulting in different measurements for the same DNA sample. Thus, the validation of methods and materials is a crucial step. According to the Scientific Working Group on DNA Analysis Methods (SWGDAM), “validation is a process by which a procedure is evaluated to determine its efficacy and reliability for forensic casework and/or database analysis” (SWGDAM [2016](#ref-swgdamval)). Validation helps minimize error in forensic DNA analysis and helps keep results consistent across laboratories and materials.
The process of validation for forensic DNA methodology is expensive, time consuming, and unstandardized, and the R package `strvalidator` was created to help solve these problems in forensic DNA analysis (Hansson, Gill, and Egeland [2014](#ref-strval)). The `strvalidator` package makes the validation process faster by automating data analysis with respect to “heterozygote balance, stutter ratio, inter\-locus balance, and the stochastic threshold” (Hansson, Gill, and Egeland [2014](#ref-strval)). In the remainder of this chapter, we introduce the type of data to import for use of this package, the primary functions of the package, and show an example of each of the four aforementioned validation steps in R.
2\.2 Data
---------
The `strvalidator` package takes files exported from the [GeneMapper®](https://www.thermofisher.com/order/catalog/product/4475073) software, or a similar expert system that exports tab\-delimited text files, as inputs. The data exported from these software programs typically come in a very wide format, and on import it needs to be transformed into a long format more appropriate for data analysis. In Figure [2\.1](dnaval.html#fig:widelonggif), we visualize the process of data being transformed from wide to long format and back. In wide format, variable values are column names, while in the long format these column names become part of the data.
Figure 2\.1: Animation heuristic showing the transformation from long form to wide form data and back. Code for GIF from [Omni Analytics Group](https://omnianalytics.io/2018/08/30/animating-the-data-transformation-process-with-gganimate/)
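To make the wide\-to\-long idea concrete, here is a small standalone sketch using the `tidyr` package (which is not part of `strvalidator`; the toy `Allele.1`/`Allele.2` columns only mimic the style of a GeneMapper® export):
```
library(tidyr)
# toy wide-format export: one column per possible allele at a locus
wide <- data.frame(Sample.Name = "PC1", Marker = "TH01",
                   Allele.1 = "6", Allele.2 = "9.3")
# gather the Allele.* columns into a single long-format Allele column
long <- pivot_longer(wide, cols = starts_with("Allele"),
                     names_to = "slot", values_to = "Allele")
long
```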
The `strvalidator` package contains import methods to make sure that the data imported from other software is in the right form for validation analysis. The GeneMapper® software creates one column for each possible allele observed at a locus and their corresponding sizes, heights, and data points. Once the data have been trimmed and slimmed, they look something like this:
```
head(myDNAdata)
```
```
## Sample.Name Marker Dye Allele Height
## 1 03-A2.1 AMEL B X 27940
## 2 03-A2.1 AMEL B Y 107
## 3 03-A2.1 D3S1358 B OL 80
## 4 03-A2.1 D3S1358 B OL 453
## 5 03-A2.1 D3S1358 B OL 85
## 6 03-A2.1 D3S1358 B OL 408
```
where
* `Sample.Name` is the name of the sample being analyzed
* `Marker` is the locus in the DNA analysis kit
* `Dye` is the dye channel for that locus
* `Allele` is the allele (\# of sequence repeats) at that locus
* `Height` is the observed peak height after amplification in RFUs (RFU \= Relative Fluorescence Unit)
2\.3 R Package
--------------
The package `strvalidator` has a [graphical user interface](glossary.html#def:gui) (GUI) to perform analyses so that no coding knowledge is necessary to run these analyses. The author of the package, Oskar Hansson, has written an extensive tutorial[3](#fn3) on the GUI. As this book is focused on open science, we do not use the GUI because it does not output the underlying code used for the point\-and\-click analyses. Instead, we use the code that powers the GUI directly. This code is called the “basic layer” of the package by Hansson, Gill, and Egeland ([2014](#ref-strval)).
The data are read into R via the `import()` function. This function combines the processes of trimming and slimming the data. Trimming selects the columns of interest for your analysis (e.g. `Sample.Name`, `Allele`, `Height`), while slimming converts the data from wide format to long format, as shown in Figure [2\.1](dnaval.html#fig:widelonggif).
After the data has be loaded, there are four main families of functions in the `strvalidator` package that are used for analysis.
* `add*()`: Add to the DNA data. For example, use `addMarker()` to add locus information or `addSize()` to add the fragment size in base pair (bp) for each allele.
* `calculate*()`: Compute properties of the DNA data. For example, use `calculateHb()` to compute heterozygous balance for the data or `calculateLb()` to compute the inter\-locus balance (profile balance) of the data.
* `remove*()` : Remove artifacts from the data with `removeArtefact()` and remove spikes from the data with `removeSpike()`.
* `table*()` : Summarize results from one of the `calculate*()` analyses. For example, `tableStutter()` summarizes the results from `calculateStutter()`.
For complete definitions and explanations of all functions available in `strvalidator`, please see the [`strvalidator` manual](https://drive.google.com/file/d/0B2v6NDpFIgvDMzlydllKaHVBYW8/view). There are many other capabilities of `strvalidator` that we do not cover in this chapter for the sake of brevity.
2\.4 Drawing Conclusions
------------------------
There is no one tidy way to conclude a DNA validation analysis, which may be done for new machines, new kits, or any internal validation required. The `strvalidator` package’s primary purpose is to *import* large validation data sets and *analyze* the results of the validation experiment according to different metrics (Riman et al. [2016](#ref-riman16)). A more complete description of the necessary validation studies is found in SWGDAM ([2016](#ref-swgdamval)), and full step\-by\-step tutorials can be found in Riman et al. ([2016](#ref-riman16)) and Hansson ([2018](#ref-strvalweb)).
For validation analysis with respect to heterozygote balance, stutter ratio, inter\-locus balance, and stochastic threshold, there are recommended guidelines to follow.
### 2\.4\.1 Stutter ratio
The `*Stutter()` functions in `strvalidator` can analyze ratios of different types of stutter such as the backward stutter, the forward stutter, and the allowed overlap (none, stutter, or allele), as shown in Figure [2\.4](dnaval.html#fig:stutterfig). Each of Hill et al. ([2011](#ref-hill2011)), Westen et al. ([2012](#ref-westen2012)), Brookes et al. ([2012](#ref-brookes2012)), and Tvedebrink et al. ([2012](#ref-tvedebrink2012)) show greater stutter with more repeats, and these results are similar to those in Hansson, Gill, and Egeland ([2014](#ref-strval)). In addition they found that some loci, such as TH01, experience less stutter on average than others.
### 2\.4\.2 Heterozygote balance
For guidelines specific to the PowerPlex® ESX 17 and ESI 17 systems featured in Hansson, Gill, and Egeland ([2014](#ref-strval)), refer to Hill et al. ([2011](#ref-hill2011)). Generally speaking, per Gill, Sparkes, and Kimpton ([1997](#ref-gill97)), the heterozygote balance should be no less than 60%.
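As a conceptual sketch only (not the `calculateHb()` implementation, which supports several ratio definitions), the balance at a single heterozygous locus is simply the smaller of the two peak heights divided by the larger:
```
# toy peak heights (RFU) for the two alleles of one heterozygous locus
peaks <- c(allele1 = 2486, allele2 = 2850)
hb <- min(peaks) / max(peaks)
hb # roughly 0.87, comfortably above the 60% guideline
```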
### 2\.4\.3 Inter\-locus balance
Per Hansson, Gill, and Egeland ([2014](#ref-strval)), there are two methods in `strvalidator` to compute inter\-locus balance.
1. As the proportion of the total peak height of a profile
2. Relative to the highest peak total within a single locus in the profile, with the option to compute this value for each dye channel.
Ideally, the loci would be perfectly balanced and the total peak height in each locus would be equal to \\(\\frac{1}{n}\\) where \\(n\\) is the number of loci in the kit (Hansson, Gill, and Egeland [2014](#ref-strval)).
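A rough `dplyr` sketch of method 1 is shown below; it assumes a slimmed data frame with `Sample.Name`, `Marker`, and `Height` columns (such as `set1.slim` built in the case study of Section 2\.5\) and is only an approximation of what `calculateLb()` does:
```
library(dplyr)
# proportion of each profile's total peak height carried by each locus
lb <- set1.slim %>%
  mutate(Height = as.numeric(Height)) %>%
  group_by(Sample.Name, Marker) %>%
  summarise(locus_height = sum(Height, na.rm = TRUE), .groups = "drop") %>%
  group_by(Sample.Name) %>%
  mutate(Lb = locus_height / sum(locus_height)) %>%
  ungroup()
head(lb)
```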
### 2\.4\.4 Stochastic threshold
The *stochastic threshold* (ST) or *interpretation threshold* is the “point above which there is a low probability that the second allele in a truly heterozygous sample has dropped out” (Butler [2009](#ref-butler09)). The ST is used to assess dropout risk in `strvalidator`. Another important threshold in DNA interpretation is the analytical threshold (AT), which is a peak height (for example, 50 RFUs) above which peaks “are considered an analytical signal and thus recorded by the data analysis software” (Butler [2009](#ref-butler09)). Hansson, Gill, and Egeland ([2014](#ref-strval)) refer to the analytical threshold (AT) as the limit of detection threshold (LDT). Peaks above the AT are considered *signal*, and any peaks below the AT are considered *noise*. The ST is the RFU value above which it is reasonable to assume that, at a given locus, allelic dropout of a sister allele has not occurred.[4](#fn4) Peaks that appear to be homozygous but have heights above the AT and below the ST may not be true homozygotes and may have experienced stochastic effects, such as allele dropout or elevated stutter. Usually, these stochastic events only happen for very small amounts of DNA that have been amplified.
In `strvalidator`, dropout is scored according to the user\-provided LDT value and the reference data. The risk of dropout is then modeled using a logistic regression of the calculated dropout score on the allele heights. For an acceptable level of dropout risk, say 1%, the stochastic threshold is then computed from the fitted logistic regression model. Thus, the ST is the peak height at which the probability of dropout is less than or equal to 1%.
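The idea can be sketched with base R’s `glm()`. This mirrors the description above rather than `strvalidator`’s exact functions, and `dropout_data` is a placeholder data frame with a 0/1 `dropout` indicator and the surviving allele’s peak `height`:
```
# logistic regression of dropout on peak height (placeholder data)
fit <- glm(dropout ~ height, family = binomial, data = dropout_data)
# stochastic threshold: the height at which P(dropout) = 1%
p <- 0.01
st <- (log(p / (1 - p)) - coef(fit)[1]) / coef(fit)[2]
unname(st)
```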
2\.5 Case Study
---------------
We do a simple case study using eight repeated samples from the same individual that are included in the `strvalidator` package.
### 2\.5\.1 Get the data
#### Sam Tyner
2\.1 Introduction
-----------------
The earliest documented use of [DNA profiling](glossary.html#def:dnaprof) in the legal system was an immigration dispute in the United Kingdom (Butler [2005](#ref-butler05)). A young man of Ghanaian descent with family in the UK was suspected of having forged his Ghanaian passport, and his British passport had expired[2](#fn2). DNA profiling techniques developed by Sir Alec Jeffreys were used to prove that he was indeed his mother’s son, and thus that he did have a right to immigrate to the UK. The technique was subsequently used for many other parentage cases, and soon after, DNA profiling was used for the first time to convict someone of murder in 1986 (Butler [2009](#ref-butler09)).
When DNA profiling began, an individual’s blood sample was taken to create their DNA profile. Now, DNA can be taken by a cheek swab, and minute traces of touch DNA can tie a perpetrator to the scene of the crime. This is thanks to the [polymerase chain reaction](glossary.html#def:pcr) (PCR), a method of copying a DNA sample over and over again to amplify the genetic signal for profile extraction. Once a DNA sample is amplified with PCR, different DNA markers can be analyzed to make an identification. The standard for forensic DNA typing is to use [short tandem repeats](glossary.html#def:str) (STRs) as the DNA marker. Other markers, [single nucleotide polymorphisms](glossary.html#def:snp) (SNPs) and the mitochondrial genome ([mtDNA](glossary.html#def:mtdna)), have different uses. SNPs can be used to identify ancestry or visible traits of a human, while mtDNA is used in cases where DNA is highly degraded (Liu and Harbison [2018](#ref-dnareview)). Because STR is the standard, we dedicate the rest of this chapter to its methodology.
### 2\.1\.1 Procedure for DNA Analysis using STRs
In order to understand the STR methodology, we first need to understand what is being analysed. We present the comparison of genetic and printed information from Butler ([2009](#ref-butler09)) in Table [2\.1](dnaval.html#tab:comparetable). When forensic scientists analyze a DNA sample, they are looking for repeated “words” or DNA sequences in different “paragraphs,” or loci. The locus information is stored in the [chromosome](glossary.html#def:chromo), which is the “page” the genetic information is on. Your *chromosomes* exist in the *nucleus* of every *cell* in your *body*, just like a *page* is within a *chapter* in a *book* in a *library*. STR markers are a set of loci or genes. At each locus, the number of times a tetranucleotide sequence (e.g. AAGC) repeats is counted (Butler [2005](#ref-butler05)). This count indicates the [allele](glossary.html#def:allele), or gene variation, at that particular locus.
Table 2\.1: Recreation of Table 2\.1 from Butler ([2009](#ref-butler09))
| Printed Information | Genetic Information |
| --- | --- |
| Library | Body |
| Book | Cell |
| Chapter | Nucleus |
| Page | Chromosome |
| Paragraph | [Locus](glossary.html#def:locus) or gene |
| Word | Short DNA sequence |
| Letter | DNA [nucleotide](glossary.html#def:nucleotide) |
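To make the repeat\-counting idea concrete, here is a minimal base R sketch that counts how many times a tetranucleotide motif (AAGC, following the example above) occurs in a short made\-up sequence; the sequence is invented for illustration and is not from a real locus.
```
# toy sequence: the tetranucleotide motif "AAGC" repeated 7 times
toy_seq <- paste(rep("AAGC", 7), collapse = "")
toy_seq

# count non-overlapping occurrences of the motif in the sequence
motif <- "AAGC"
n_repeats <- lengths(gregexpr(motif, toy_seq, fixed = TRUE))
n_repeats  # 7 -- in STR terms, this count is the allele at the locus
```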
In forensic DNA profiling, a particular set of loci is examined for comparison. The number of loci depends on the equipment and method used to analyze the DNA sample, but can be as high as 27 for the particular method we discuss here (Butler, Hill, and Coble [2012](#ref-loci27)). As of January 1, 2017, there are 20 core loci in CODIS, the Combined DNA Index System, which is the FBI’s national program for DNA [databases](glossary.html#def:database) and software used in criminal justice systems across the United States (FBI [2017](#ref-codis)). These sets of loci were chosen because of their high variability in the population. To find the alleles at each locus, the DNA sample is amplified using PCR, and then run through [capillary electrophoresis](glossary.html#def:ce) (CE). The result of CE is the DNA profile, with the alleles on each locus indicated by different colored peaks from a chemical dyeing process.
The amplification process introduces random change known as *slippage*, which creates [*stutter peaks*](glossary.html#def:stutter) in the observed DNA profile that are different than the true allele peaks (Butler [2009](#ref-butler09)). In addition, different labs may use different machines and materials in forensic analysis resulting in different measurements for the same DNA sample. Thus, the validation of methods and materials is a crucial step. According to the Scientific Working Group on DNA Analysis Methods (SWGDAM), “validation is a process by which a procedure is evaluated to determine its efficacy and reliability for forensic casework and/or database analysis” (SWGDAM [2016](#ref-swgdamval)). Validation helps minimize error in forensic DNA analysis and helps keep results consistent across laboratories and materials.
The process of validation for forensic DNA methodology is expensive, time consuming, and unstandardized, and the R package `strvalidator` was created to help solve these problems in forensic DNA analysis (Hansson, Gill, and Egeland [2014](#ref-strval)). The `strvalidator` package makes the validation process faster by automating data analysis with respect to “heterozygote balance, stutter ratio, inter\-locus balance, and the stochastic threshold” (Hansson, Gill, and Egeland [2014](#ref-strval)). In the remainder of this chapter, we introduce the type of data to import for use of this package, the primary functions of the package, and show an example of each of the four aforementioned validation steps in R.
2\.2 Data
---------
The `strvalidator` package takes as input files exported from the [GeneMapper®](https://www.thermofisher.com/order/catalog/product/4475073) software, or from a similar expert system that exports tab\-delimited text files. The data exported from these software programs typically come in a very wide format, and on import they need to be transformed into a long format more appropriate for data analysis. In Figure [2\.1](dnaval.html#fig:widelonggif), we visualize the process of data being transformed from wide to long format and back. In wide format, variable values are column names, while in the long format these column names become part of the data.
Figure 2\.1: Animation heuristic showing the transformation from long form to wide form data and back. Code for GIF from [Omni Analytics Group](https://omnianalytics.io/2018/08/30/animating-the-data-transformation-process-with-gganimate/)
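As a small illustration of the reshaping in Figure [2\.1](dnaval.html#fig:widelonggif) (independent of the `strvalidator` import functions), the sketch below pivots a made\-up wide table of alleles into long format with the `tidyr` package; the values are invented for the example.
```
library(tidyr)

# made-up wide-format genotyping data: one column per possible allele
wide_dat <- data.frame(Sample.Name = c("PC1", "PC1"),
                       Marker      = c("AMEL", "TH01"),
                       Allele.1    = c("X", "6"),
                       Allele.2    = c("Y", "9.3"))

# pivot to long format: the Allele.* column names become part of the data
long_dat <- pivot_longer(wide_dat, cols = starts_with("Allele"),
                         names_to = "Slot", values_to = "Allele")
long_dat
```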
The `strvalidator` package contains import methods to make sure that the data imported from other software is in the right form for validation analysis. The GeneMapper® software creates one column for each possible allele observed at a locus, along with columns for the corresponding sizes, heights, and data points. Once the data have been trimmed and slimmed, they look something like this:
```
head(myDNAdata)
```
```
## Sample.Name Marker Dye Allele Height
## 1 03-A2.1 AMEL B X 27940
## 2 03-A2.1 AMEL B Y 107
## 3 03-A2.1 D3S1358 B OL 80
## 4 03-A2.1 D3S1358 B OL 453
## 5 03-A2.1 D3S1358 B OL 85
## 6 03-A2.1 D3S1358 B OL 408
```
where
* `Sample.Name` is the name of the sample being analyzed
* `Marker` is the locus in the DNA analysis kit
* `Dye` is the dye channel for that locus
* `Allele` is the allele (\# of sequence repeats) at that locus
* `Height` is the observed peak height after amplification in RFUs (RFU \= Relative Fluorescence Unit)
2\.3 R Package
--------------
The package `strvalidator` has a [graphical user interface](glossary.html#def:gui) (GUI) to perform analyses so that no coding knowledge is necessary to run these analyses. The author of the package, Oskar Hansson, has written an extensive tutorial[3](#fn3) on the GUI. As this book is focused on open science, we do not use the GUI because it does not output the underlying code used for the point\-and\-click analyses. Instead, we use the code that powers the GUI directly. This code is called the “basic layer” of the package by Hansson, Gill, and Egeland ([2014](#ref-strval)).
The data are read into R via the `import()` function. This function combines the processes of trimming and slimming the data. Trimming selects the columns of interest for your analysis (e.g. `Sample.Name`, `Allele`, `Height`), while slimming converts the data from wide format to long format, as shown in Figure [2\.1](dnaval.html#fig:widelonggif).
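The exact call depends on how your expert system exports its files. As a rough sketch (with a hypothetical file path), a tab\-delimited GeneMapper® export could also be read with base R and then reshaped with `slim()`, mirroring what `import()` does in one step:
```
library(strvalidator)

# hypothetical path to a tab-delimited GeneMapper export
raw_dat <- read.delim("exports/genemapper_run1.txt", stringsAsFactors = FALSE)

# stack the repeated Allele.*/Height.* columns into long format,
# keeping the fixed identifying columns
dat_long <- slim(raw_dat,
                 fix = c("Sample.Name", "Marker", "Dye"),
                 stack = c("Allele", "Height"),
                 keep.na = FALSE)
```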
After the data have been loaded, there are four main families of functions in the `strvalidator` package that are used for analysis.
* `add*()`: Add to the DNA data. For example, use `addMarker()` to add locus information or `addSize()` to add the fragment size in base pair (bp) for each allele.
* `calculate*()`: Compute properties of the DNA data. For example, use `calculateHb()` to compute heterozygous balance for the data or `calculateLb()` to compute the inter\-locus balance (profile balance) of the data.
* `remove*()`: Remove artifacts from the data with `removeArtefact()` and remove spikes from the data with `removeSpike()`.
* `table*()`: Summarize results from one of the `calculate*()` analyses. For example, `tableStutter()` summarizes the results from `calculateStutter()`.
For complete definitions and explanations of all functions available in `strvalidator`, please see the [`strvalidator` manual](https://drive.google.com/file/d/0B2v6NDpFIgvDMzlydllKaHVBYW8/view). There are many other capabilities of `strvalidator` that we do not cover in this chapter for the sake of brevity.
2\.4 Drawing Conclusions
------------------------
There is no one tidy way to conclude a DNA validation analysis, which may be done for new machines, new kits, or any internal validation required. The `strvalidator` package’s primary purpose is to *import* large validation data sets and *analyze* the results of the validation experiment according to different metrics (Riman et al. [2016](#ref-riman16)). A more complete description of the necessary validation studies is found in SWGDAM ([2016](#ref-swgdamval)), and full step\-by\-step tutorials can be found in Riman et al. ([2016](#ref-riman16)) and Hansson ([2018](#ref-strvalweb)).
For validation analysis with respect to heterozygote balance, stutter ratio, inter\-locus balance, and stochastic threshold, there are recommended guidelines to follow.
### 2\.4\.1 Stutter ratio
The `*Stutter()` functions in `strvalidator` can analyze ratios of different types of stutter such as the backward stutter, the forward stutter, and the allowed overlap (none, stutter, or allele), as shown in Figure [2\.4](dnaval.html#fig:stutterfig). Hill et al. ([2011](#ref-hill2011)), Westen et al. ([2012](#ref-westen2012)), Brookes et al. ([2012](#ref-brookes2012)), and Tvedebrink et al. ([2012](#ref-tvedebrink2012)) each show greater stutter with more repeats, and these results are similar to those in Hansson, Gill, and Egeland ([2014](#ref-strval)). In addition, these studies found that some loci, such as TH01, experience less stutter on average than others.
### 2\.4\.2 Heterozygote balance
For guidelines specific to the PowerPlex® ESX 17 and ESI 17 systems featured in Hansson, Gill, and Egeland ([2014](#ref-strval)), refer to Hill et al. ([2011](#ref-hill2011)). Generally speaking, per Gill, Sparkes, and Kimpton ([1997](#ref-gill97)), the heterozygote balance should be no less than 60%.
### 2\.4\.3 Inter\-locus balance
Per Hansson, Gill, and Egeland ([2014](#ref-strval)), there are two methods in `strvalidator` to compute inter\-locus balance.
1. As the proportion of the total peak height of a profile
2. Relative to the highest peak total within a single locus in the profile, with the option to compute this value for each dye channel.
Ideally, the loci would be perfectly balanced and the total peak height in each locus would be equal to \\(\\frac{1}{n}\\) where \\(n\\) is the number of loci in the kit (Hansson, Gill, and Egeland [2014](#ref-strval)).
### 2\.4\.4 Stochastic threshold
The *stochastic threshold* (ST) or *interpretation threshold* is the “point above which there is a low probability that the second allele in a truly heterozygous sample has not dropped out” (Butler [2009](#ref-butler09)). The ST is used to assess dropout risk in `strvalidator`. Another important threshold in DNA interpretation is the analytical threshold (AT), which is a peak height (for example, 50 RFUs) above which peaks “are considered an analytical signal and thus recorded by the data analysis software” (Butler [2009](#ref-butler09)). Hansson, Gill, and Egeland ([2014](#ref-strval)) refer to the analytical threshold (AT) as the limit of detection threshold (LDT). Peaks above the AT are considered *signal*, and any peaks below the AT are considered *noise*. The ST is the RFU value above which it is reasonable to assume that, at a given locus, allelic dropout of a sister allele has not occurred.[4](#fn4) Peaks that appear to be homozygous but have heights above the AT and below the ST may not be true homozygotes and may have experienced stochastic effects, such as allele dropout or elevated stutter. Usually, these stochastic events only happen for very small amounts of DNA that have been amplified.
In `strvalidator`, dropout is scored according to the user\-provided LDT value and the reference data provided. The risk of dropout is then modeled using a logistic regression of the calculated dropout score on the allele heights. Then for an acceptable level of dropout risk, say 1%, the stochastic threshold is computed according to the logistic regression model. Thus, the ST is the peak height at which the probability of dropout is less than or equal to 1%.
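To make that last step concrete, here is a minimal simulated sketch (toy data, not real validation data) of inverting a fitted logistic model to find the peak height at which the predicted dropout probability equals 0\.01; this illustrates the idea only and is not the internal method of `strvalidator`.
```
set.seed(1)
# simulate peak heights and dropout indicators: dropout becomes more
# likely as the peak height of the surviving allele decreases
toy <- data.frame(height = runif(500, min = 50, max = 1500))
toy$dropout <- rbinom(500, size = 1, prob = plogis(4 - 0.02 * toy$height))

toy_mod <- glm(dropout ~ height, family = binomial("logit"), data = toy)

# invert the logit: logit(0.01) = b0 + b1 * h  =>  h = (logit(0.01) - b0) / b1
b <- coef(toy_mod)
st_1pct <- (qlogis(0.01) - b["(Intercept)"]) / b["height"]
st_1pct
```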
2\.5 Case Study
---------------
We do a simple case study using eight repeated samples from the same individual that are included in the `strvalidator` package.
### 2\.5\.1 Get the data
We’ll use the package data `set1`, which is data from the genotyping of eight replicate measurements of a positive control sample, one replicate of a negative control sample, and the ladder used in analysis. The PowerPlex® ESX 17 System from the Promega Corporation[5](#fn5) was used on these samples for amplification of 17 loci recommended for analysis by the European Network of Forensic Science Institutes (ENFSI) and the European DNA Profiling Group (EDNAP), the European equivalent of SWGDAM. The known reference sample used is the `ref1` data in the `strvalidator` package.
First, we load the data, then slim it for analysis. Then, we use `generateEPG()` to visualize an electropherogram\-like plot of the data. This function, like the other plotting functions in `strvalidator`, is built on the `ggplot2` package (Wickham, Chang, et al. [2019](#ref-R-ggplot2)). We also use the [`dplyr`](https://dplyr.tidyverse.org/) package throughout for data manipulation tasks (Wickham, François, et al. [2019](#ref-R-dplyr)).
```
library(strvalidator)
library(dplyr)
library(ggplot2)
data(set1)
head(set1)
```
```
## Sample.Name Marker Dye Allele.1 Allele.2 Allele.3 Allele.4 Allele.5
## 1 PC1 AMEL B X OL Y <NA> <NA>
## 2 PC1 D3S1358 B 16 17 18 <NA> <NA>
## 3 PC1 TH01 B 6 9.3 <NA> <NA> <NA>
## 4 PC1 D21S11 B 28 29 30.2 31.2 <NA>
## 5 PC1 D18S51 B 15 16 17 18 <NA>
## 6 PC1 D10S1248 G 12 13 14 15 <NA>
## Height.1 Height.2 Height.3 Height.4 Height.5
## 1 2486 81 2850 <NA> <NA>
## 2 260 3251 2985 <NA> <NA>
## 3 3357 2687 <NA> <NA> <NA>
## 4 183 2036 180 1942 <NA>
## 5 161 2051 203 1617 <NA>
## 6 168 2142 243 2230 <NA>
```
```
# slim and trim the data
set1.slim <- slim(set1, fix = c("Sample.Name", "Marker", "Dye"), stack = c("Allele",
"Height"), keep.na = FALSE)
dim(set1)
```
```
## [1] 170 13
```
```
dim(set1.slim)
```
```
## [1] 575 5
```
```
head(set1.slim)
```
```
## Sample.Name Marker Dye Allele Height
## 1 PC1 AMEL B X 2486
## 2 PC1 AMEL B OL 81
## 3 PC1 AMEL B Y 2850
## 4 PC1 D3S1358 B 16 260
## 5 PC1 D3S1358 B 17 3251
## 6 PC1 D3S1358 B 18 2985
```
```
p <- set1.slim %>% filter(Sample.Name != "Ladder") %>% generateEPG(kit = "ESX17")
```
```
p + ggtitle("Mean peak heights for 8 samples from PC shown")
```
Figure 2\.2: Electropherogram\-like `ggplot2` plot of the mean of all 8 samples in `set1`
Next, get the reference sample data.
```
data(ref1)
head(ref1)
```
```
## Sample.Name Marker Allele.1 Allele.2
## 1 PC AMEL X Y
## 2 PC D3S1358 17 18
## 3 PC TH01 6 9.3
## 4 PC D21S11 29 31.2
## 5 PC D18S51 16 18
## 6 PC D10S1248 13 15
```
```
ref1.slim <- slim(ref1, fix = c("Sample.Name", "Marker"), stack = "Allele",
keep.na = FALSE)
head(ref1.slim)
```
```
## Sample.Name Marker Allele
## 1 PC AMEL X
## 2 PC AMEL Y
## 3 PC D3S1358 17
## 4 PC D3S1358 18
## 5 PC TH01 6
## 6 PC TH01 9.3
```
```
p <- generateEPG(ref1.slim, kit = "ESX17") + ggtitle("True profile for sample PC")
```
```
p
```
Figure 2\.3: The reference profile electropherogram, `ref1`.
### 2\.5\.2 Check the stutter ratio
Figure 2\.4: Figure 2 from Hansson, Gill, and Egeland ([2014](#ref-strval)). The analysis range of 2 back stutters and 1 forward stutter is shown at 3 levels of overlap.
Stutter peaks are byproducts of the DNA amplification process, and their presence muddles data interpretation (Hansson, Gill, and Egeland [2014](#ref-strval)). Stutter is caused by strand slippage in PCR (Butler [2009](#ref-butler09)). This slippage causes small peaks to appear next to true peaks, and a threshold is needed to determine whether a peak is caused by slippage or whether it could instead be a true allele from a minor contributor in a mixed sample. We calculate the stutter for the eight replicates in `set1` using one back stutter, no forward stutter, and no overlap. We compare these values to the 95\\(^{th}\\) percentiles in Table 3 of Hansson, Gill, and Egeland ([2014](#ref-strval)). See Figure [2\.4](dnaval.html#fig:stutterfig) for an example of stutter.
```
# make sure the right samples are being analyzed
checkSubset(data = set1.slim, ref = ref1.slim)
```
```
## Reference name: PC
## Subsetted samples: PC1, PC2, PC3, PC4, PC5, PC6, PC7, PC8
```
```
# supply the false stutter and true stutter values for your data. these are
# from the GUI.
stutter_false_val <- c(-1.9, -1.8, -1.7, -0.9, -0.8, -0.7, 0.9, 0.8, 0.7)
stutter_replace_val <- c(-1.3, -1.2, -1.1, -0.3, -0.2, -0.1, 0.3, 0.2, 0.1)
# calculate the stutter values
set1_stutter <- calculateStutter(set1.slim, ref1.slim, back = 1, forward = 0,
interference = 0, replace.val = stutter_false_val, by.val = stutter_replace_val)
stutterplot <- addColor(set1_stutter, kit = "ESX17") %>% sortMarker(kit = "ESX17",
add.missing.levels = FALSE)
marks <- levels(stutterplot$Marker)[-1]
stutterplot$Marker <- factor(as.character(stutterplot$Marker), levels = marks)
compare_dat <- data.frame(Marker = ref1$Marker[-1], perc95 = (c(11.9, 4.6, 10.9,
10.7, 12.1, 12, 11.1, 10.4, 16, 11.4, 9.1, 10.1, 8.3, 14.4, 10.1, 12.8))/100)
compare_dat <- filter(compare_dat, Marker %in% stutterplot$Marker)
ggplot() + geom_point(data = stutterplot, position = position_jitter(width = 0.1),
aes(x = Allele, y = Ratio, color = as.factor(Type)), alpha = 0.7) + geom_hline(data = compare_dat,
aes(yintercept = perc95), linetype = "dotted") + facet_wrap(~Marker, ncol = 4,
scales = "free_x", drop = FALSE) + labs(x = "True Allele", y = "Stutter Ratio",
color = "Type")
```
Figure 2\.5: Stutter ratios by allele for each of the eight samples in the `set1` data, computed for one back stutter, zero forward stutter, and no overlap. Note that SR increases with allele length (e.g. D10S1248; D2S1338; D12S391\). Horizontal dotted lines represent the 95th percentile of stutter ratio values from the study done in Hansson, Gill, and Egeland ([2014](#ref-strval)).
Figure [2\.5](dnaval.html#fig:stutter1) shows the ratio of stutter for each of the eight control samples in `set1`. The horizontal dotted lines show the 95\\(^{th}\\) percentile of the stutter ratio values computed in the same way from 220 samples in Hansson, Gill, and Egeland ([2014](#ref-strval)). There are a few stutter values above the dotted line, but overall the values correspond to what we expect to happen in a sample with only one contributor. Unusual values are shown in Table [2\.2](dnaval.html#tab:stuthigh).
Table 2\.2: Stutter peaks larger than the 95\\(^{th}\\) percentile of peak values for the study in Hansson, Gill, and Egeland ([2014](#ref-strval)).
| Sample.Name | Marker | Allele | HeightA | Stutter | HeightS | Ratio | Type | 95th perc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PC1 | D18S51 | 18 | 1617 | 17 | 203 | 0\.126 | \-1 | 0\.107 |
| PC3 | D18S51 | 18 | 1681 | 17 | 195 | 0\.116 | \-1 | 0\.107 |
| PC4 | D2S1338 | 25 | 3133 | 24 | 352 | 0\.112 | \-1 | 0\.111 |
| PC5 | D12S391 | 23 | 4378 | 22 | 640 | 0\.146 | \-1 | 0\.144 |
| PC6 | D2S1338 | 25 | 2337 | 24 | 261 | 0\.112 | \-1 | 0\.111 |
| PC6 | vWA | 19 | 1571 | 18 | 196 | 0\.125 | \-1 | 0\.114 |
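As a quick check on how the Ratio column in Table [2\.2](dnaval.html#tab:stuthigh) is formed (the stutter peak height divided by the parent allele peak height), the first row can be recomputed by hand:
```
# PC1, D18S51: parent allele 18 at 1617 RFU, back stutter at allele 17 of 203 RFU
height_allele  <- 1617
height_stutter <- 203
round(height_stutter / height_allele, 3)  # 0.126, matching Table 2.2
```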
### 2\.5\.3 Check heterozygote balance (intra\-locus balance)
Computing the heterozygote peak balance (Hb) is most important for analyzing samples with two or more contributors. We calculate Hb values for the eight repeated samples in `set1` below using Equation 3 from Hansson, Gill, and Egeland ([2014](#ref-strval)) to compute the ratio.
```
# checkSubset(data = set3, ref = ref3)
set1_hb <- calculateHb(data = set1.slim, ref = ref1.slim, hb = 3, kit = "ESX17",
sex.rm = TRUE, qs.rm = TRUE, ignore.case = TRUE)
hbplot <- addColor(set1_hb, kit = "ESX17") %>% sortMarker(kit = "ESX17", add.missing.levels = FALSE)
hbplot$Marker <- factor(as.character(hbplot$Marker), levels = marks)
ggplot(data = hbplot) + geom_point(aes(x = MPH, y = Hb, color = Dye), position = position_jitter(width = 0.1)) +
geom_hline(yintercept = 0.6, linetype = "dotted") + facet_wrap(~Marker,
nrow = 4, scales = "free_x", drop = FALSE) + scale_color_manual(values = c("blue",
"green", "black", "red")) + labs(x = "Mean Peak Height (RFU)", y = "Ratio",
color = "Dye") + guides(color = guide_legend(nrow = 1)) + theme(axis.text.x = element_text(size = rel(0.8)),
legend.position = "top")
```
Figure 2\.6: Hb ratio values for the eight samples in `set1`. Most ratios are above the 0\.6 threshold.
Figure [2\.6](dnaval.html#fig:hb) shows the Hb values for the eight samples in `set1`. The balance ratio is typically no less than 0\.6 according to Gill, Sparkes, and Kimpton ([1997](#ref-gill97)), but there are a few exceptions to this rule in the `set1` sample, shown in Table [2\.3](dnaval.html#tab:smhb).
Table 2\.3: Observations in the `set1` data which have Hb value less than 0\.6\.
| Sample.Name | Marker | Dye | Delta | Small | Large | MPH | Hb |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PC1 | D12S391 | R | 5\.0 | 2323 | 4017 | 3170\.0 | 0\.578 |
| PC3 | SE33 | R | 1\.0 | 4017 | 6761 | 5389\.0 | 0\.594 |
| PC6 | D10S1248 | G | 2\.0 | 1760 | 3071 | 2415\.5 | 0\.573 |
| PC7 | D21S11 | B | 2\.2 | 1487 | 2678 | 2082\.5 | 0\.555 |
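The Hb column in Table [2\.3](dnaval.html#tab:smhb) can be checked the same way: it is the smaller of the two heterozygous peak heights divided by the larger, and MPH is their mean. Recomputing the first row:
```
# PC1, D12S391: heterozygous peaks of 2323 and 4017 RFU
small <- 2323
large <- 4017
round(small / large, 3)  # 0.578, matching the Hb column
(small + large) / 2      # 3170, matching the MPH column
```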
### 2\.5\.4 Check inter\-locus balance
Inter\-locus balance (Lb) is a measure of peak balance across loci (Hansson, Gill, and Egeland [2014](#ref-strval)). The total peak height of a profile should be spread evenly across the individual loci in a sample. In the `set1` data, 17 loci are measured, thus each individual locus balance should be about \\(\\frac{1}{17}^{th}\\) of the total height of all peaks in RFUs.
```
set1_lb <- calculateLb(data = set1.slim, ref = ref1.slim, kit = "ESX17", option = "prop",
by.dye = FALSE, ol.rm = TRUE, sex.rm = FALSE, qs.rm = TRUE, ignore.case = TRUE,
na = 0)
set1_height <- calculateHeight(data = set1.slim, ref = ref1.slim, kit = "ESX17",
sex.rm = FALSE, qs.rm = TRUE, na.replace = 0)
set1_lb <- set1_lb %>% left_join(set1_height %>% select(Sample.Name:Marker,
Dye, TPH, H, Expected, Proportion) %>% distinct(), by = c("Sample.Name",
"Marker", "Dye", TPPH = "TPH"))
set1_lb <- sortMarker(set1_lb, kit = "ESX17", add.missing.levels = TRUE)
ggplot(set1_lb) + geom_boxplot(aes(x = Marker, y = Lb, color = Dye), alpha = 0.7) +
scale_color_manual(values = c("blue", "green", "black", "red")) + geom_hline(yintercept = 1/17,
linetype = "dotted") + theme(legend.position = "top", axis.text.x = element_text(size = rel(0.8),
angle = 270, hjust = 0, vjust = 0.5)) + labs(y = "Lb (proportional method)")
```
Figure 2\.7: Inter\-locus balance for the eight PC samples. At each locus, the value should be about 1/17\. The peak heights should ideally be similar in each locus.
The inter\-locus balance for this kit should ideally be about \\(\\frac{1}{17} \\approx 0\.059\\). This value is shown by the horizontal dotted line in Figure [2\.7](dnaval.html#fig:lb). However, the markers in the red dye channel have consistently higher than ideal peaks and those in the yellow channel have consistently lower than ideal peaks.
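As a rough `dplyr` check on the proportional Lb (summing peak heights within each sample and locus and dividing by the per\-sample total), the sketch below uses only the `set1.slim` columns shown earlier; it will differ slightly from `calculateLb()`, which also handles off\-ladder peaks and quality sensors.
```
lb_manual <- set1.slim %>%
  filter(Sample.Name != "Ladder") %>%
  mutate(Height = as.numeric(Height)) %>%
  group_by(Sample.Name, Marker) %>%
  summarise(TPH = sum(Height, na.rm = TRUE), .groups = "drop") %>%
  group_by(Sample.Name) %>%
  mutate(Lb = TPH / sum(TPH)) %>%
  ungroup()
head(lb_manual)
```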
### 2\.5\.5 Check stochastic threshold
The stochastic threshold is the value of interest for determining allele drop\-out. If a peak is above the stochastic threshold, it is unlikely that an allele in a heterozygous sample “has dropped out” (Butler [2009](#ref-butler09)). Allele drop\-out occurs when the allele peak height is less than the limit of detection threshold (LDT). As recommended in Butler ([2009](#ref-butler09)), we use an LDT of 50\. The stochastic threshold is modeled with a logistic regression.
```
set1_do <- calculateDropout(data = set1.slim, ref = ref1.slim, threshold = 50,
method = "1", kit = "ESX17")
table(set1_do$Dropout)
```
```
##
## 0
## 264
```
In `set1`, there is no dropout, as the samples included are control samples, and thus enough DNA is present during amplification so there are no stochastic effects.
For a more exciting dropout analysis, we use another data set with more appropriate information. The data `set4` was created specifically for drop\-out analysis, and contains 32 samples from three different reference profiles. The `method = "1"` argument computes dropout with respect to the low molecular weight allele in the locus.
```
data(set4)
data(ref4)
set4_do <- calculateDropout(data = set4, ref = ref4, threshold = 50, method = "1",
kit = "ESX17")
table(set4_do$Dropout)
```
```
##
## 0 1 2
## 822 33 68
```
In the `set4` data, 33 alleles dropped out (`Dropout = 1`), and locus dropout (`Dropout = 2`) occurred in 9 samples (68 alleles). In one sample, all loci dropped out, while only one locus dropped out in three samples. The locus that dropped out most often was D22S1045 (in seven samples), while the loci D19S433 and D8S1179 dropped out in only two samples each.
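The counts quoted above can be recovered with a small `dplyr` summary of `set4_do`; this is a sketch that assumes only the `Sample.Name`, `Marker`, and `Dropout` columns returned by `calculateDropout()`.
```
# loci with locus dropout (Dropout == 2), and the number of samples affected
set4_do %>%
  filter(Dropout == 2) %>%
  distinct(Sample.Name, Marker) %>%
  count(Marker, sort = TRUE)

# allele dropout (Dropout == 1) tallied by marker
set4_do %>%
  filter(Dropout == 1) %>%
  count(Marker, sort = TRUE)
```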
The probability of allele drop\-out is computed via a logistic regression of the dropout score (as defined by method `1`) on the height of the allele with low molecular weight. The model parameters are also computed using the `calculateT()` function. This function also returns the smallest threshold value at which the probability of dropout is less than or equal to a set value, typically 0\.01 or 0\.05, as well as a conservative threshold, which is the value at which the risk of observing a drop\-out probability greater than the specified threshold limit is less than the set value of 0\.01 or 0\.05\.
```
set4_do2 <- set4_do %>% filter(Dropout != 2) %>% rename(Dep = Method1, Exp = Height)
do_mod <- glm(Dep ~ Exp, family = binomial("logit"), data = set4_do2)
set4_ths <- calculateT(set4_do2, pred.int = 0.98)
```
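As a rough cross\-check on `calculateT()` (not a reproduction of its internal method), the fitted model `do_mod` can be inverted directly to find the peak height at which the predicted dropout probability is 0\.01:
```
b <- coef(do_mod)
# logit(0.01) = b0 + b1 * height  =>  height = (logit(0.01) - b0) / b1
(qlogis(0.01) - b["(Intercept)"]) / b["Exp"]
```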
Next, we compute predicted dropout probabilities \\(P(D)\\) and corresponding 95% confidence intervals and plot the results.
```
xmin <- min(set4_do2$Exp, na.rm = T)
xmax <- max(set4_do2$Exp, na.rm = T)
predRange <- data.frame(Exp = seq(xmin, xmax))
ypred <- predict(do_mod, predRange, type = "link", se.fit = TRUE)
# 95% prediction interval
ylower <- plogis(ypred$fit - qnorm(1 - 0.05/2) * ypred$se) # Lower confidence limit.
yupper <- plogis(ypred$fit + qnorm(1 - 0.05/2) * ypred$se) # Upper confidence limit.
# Calculate conservative prediction curve.
yconservative <- plogis(ypred$fit + qnorm(1 - 0.05) * ypred$se)
# Calculate y values for plot.
yplot <- plogis(ypred$fit)
# combine them into a data frame for plotting
predictionDf <- data.frame(Exp = predRange$Exp, Prob = yplot, yupper = yupper,
ylower = ylower)
# plot
th_dat <- data.frame(x = 500, y = 0.5, label = paste0("At ", round(set4_ths[1],
0), " RFUs,\nthe estimated probability\nof dropout is 0.01."))
ggplot(data = predictionDf, aes(x = Exp, y = Prob)) + geom_line() + geom_ribbon(fill = "red",
alpha = 0.4, aes(ymin = ylower, ymax = yupper)) + geom_vline(xintercept = set4_ths[1],
linetype = "dotted") + geom_text(data = th_dat, inherit.aes = FALSE, aes(x = x,
y = y, label = label), hjust = 0) + xlim(c(0, 1500)) + labs(x = "Peak Height (RFUs)",
y = "Probability of allele drop-out")
```
Figure 2\.8: Probability of dropout in `set4` for peaks from 100\-1500 RFUs. 95% confidence interval for drop\-out probability shown in red.
We can also look at a heat map of dropout for each marker by sample. All the loci in sample BC10\.11 dropped out, while most other samples have no dropout whatsoever.
```
set4_do %>% tidyr::separate(Sample.Name, into = c("num", "name", "num2")) %>%
mutate(Sample.Name = paste(name, num, ifelse(is.na(num2), "", num2), sep = ".")) %>%
ggplot(aes(x = Sample.Name, y = Marker, fill = as.factor(Dropout))) + geom_tile(color = "white") +
scale_fill_brewer(name = "Dropout", palette = "Set2", labels = c("none",
"allele", "locus")) + theme(axis.text.x = element_text(size = rel(0.8),
angle = 270, hjust = 0, vjust = 0.5), legend.position = "top")
```
Figure 2\.9: Dropout for all samples in `set4` by marker.
[
### 2\.5\.1 Get the data
We’ll use the package data `set1`, which is data from the genotyping of eight replicate measurements of a positive control sample, one replicate of a negative control sample, and the ladder used in analysis. The PowerPlex® ESX 17 System from the Promega Corporation[5](#fn5) was used on these samples for amplification of 17 loci recommended for analysis by the European Network of Forensic Science Institutes (ENFSI) and the European DNA Profiling Group (EDNAP), the European equivalent of SWGDAM. The known reference sample used is the `ref1` data in the `strvalidator` package.
First, we load the data, then slim it for analysis. Then, we use `generateEPG()` to visualize an electropherogram\-like plot of the data. This function, like the other plotting functions in `strvalidator`, is built on the `ggplot2` package (Wickham, Chang, et al. [2019](#ref-R-ggplot2)). We also use the [`dplyr`](https://dplyr.tidyverse.org/) package throughout for data manipulation tasks (Wickham, François, et al. [2019](#ref-R-dplyr)).
```
library(strvalidator)
library(dplyr)
library(ggplot2)
data(set1)
head(set1)
```
```
## Sample.Name Marker Dye Allele.1 Allele.2 Allele.3 Allele.4 Allele.5
## 1 PC1 AMEL B X OL Y <NA> <NA>
## 2 PC1 D3S1358 B 16 17 18 <NA> <NA>
## 3 PC1 TH01 B 6 9.3 <NA> <NA> <NA>
## 4 PC1 D21S11 B 28 29 30.2 31.2 <NA>
## 5 PC1 D18S51 B 15 16 17 18 <NA>
## 6 PC1 D10S1248 G 12 13 14 15 <NA>
## Height.1 Height.2 Height.3 Height.4 Height.5
## 1 2486 81 2850 <NA> <NA>
## 2 260 3251 2985 <NA> <NA>
## 3 3357 2687 <NA> <NA> <NA>
## 4 183 2036 180 1942 <NA>
## 5 161 2051 203 1617 <NA>
## 6 168 2142 243 2230 <NA>
```
```
# slim and trim the data
set1.slim <- slim(set1, fix = c("Sample.Name", "Marker", "Dye"), stack = c("Allele",
"Height"), keep.na = FALSE)
dim(set1)
```
```
## [1] 170 13
```
```
dim(set1.slim)
```
```
## [1] 575 5
```
```
head(set1.slim)
```
```
## Sample.Name Marker Dye Allele Height
## 1 PC1 AMEL B X 2486
## 2 PC1 AMEL B OL 81
## 3 PC1 AMEL B Y 2850
## 4 PC1 D3S1358 B 16 260
## 5 PC1 D3S1358 B 17 3251
## 6 PC1 D3S1358 B 18 2985
```
```
p <- set1.slim %>% filter(Sample.Name != "Ladder") %>% generateEPG(kit = "ESX17")
```
```
p + ggtitle("Mean peak heights for 8 samples from PC shown")
```
Figure 2\.2: Electropherogram\-like `ggplot2` plot of the mean of all 8 samples in `set1`
Next, get the reference sample data.
```
data(ref1)
head(ref1)
```
```
## Sample.Name Marker Allele.1 Allele.2
## 1 PC AMEL X Y
## 2 PC D3S1358 17 18
## 3 PC TH01 6 9.3
## 4 PC D21S11 29 31.2
## 5 PC D18S51 16 18
## 6 PC D10S1248 13 15
```
```
ref1.slim <- slim(ref1, fix = c("Sample.Name", "Marker"), stack = "Allele",
keep.na = FALSE)
head(ref1.slim)
```
```
## Sample.Name Marker Allele
## 1 PC AMEL X
## 2 PC AMEL Y
## 3 PC D3S1358 17
## 4 PC D3S1358 18
## 5 PC TH01 6
## 6 PC TH01 9.3
```
```
p <- generateEPG(ref1.slim, kit = "ESX17") + ggtitle("True profile for sample PC")
```
```
p
```
Figure 2\.3: The reference profile electrogpherogram, `ref1`.
### 2\.5\.2 Check the stutter ratio
Figure 2\.4: Figure 2 from Hansson, Gill, and Egeland ([2014](#ref-strval)). The analysis range, 2 back stutters and 1 forward stutter is shown at 3 levels of overlap.
Stutter peaks are byproducts of the DNA amplification process, and their presence muddles data interpretation (Hansson, Gill, and Egeland [2014](#ref-strval)). Stutter is caused by strand slippage in PCR (Butler [2009](#ref-butler09)). This slippage causes small peaks to appear next to true peaks, and a threshold is needed to determine if a peak is caused by slippage or if it could be a mixture sample with a minor contributor. We calculate the stutter for the eight replicates in `set1` using one back stutter, no forward stutter and no overlap. We compare these values to the 95\\(^{th}\\) percentiles in Table 3 of Hansson, Gill, and Egeland ([2014](#ref-strval)). See Figure [2\.4](dnaval.html#fig:stutterfig) for an example of stutter.
```
# make sure the right samples are being analyzed
checkSubset(data = set1.slim, ref = ref1.slim)
```
```
## Reference name: PC
## Subsetted samples: PC1, PC2, PC3, PC4, PC5, PC6, PC7, PC8
```
```
# supply the false stutter and true stutter values for your data. these are
# from the GUI.
stutter_false_val <- c(-1.9, -1.8, -1.7, -0.9, -0.8, -0.7, 0.9, 0.8, 0.7)
stutter_replace_val <- c(-1.3, -1.2, -1.1, -0.3, -0.2, -0.1, 0.3, 0.2, 0.1)
# calculate the stutter values
set1_stutter <- calculateStutter(set1.slim, ref1.slim, back = 1, forward = 0,
interference = 0, replace.val = stutter_false_val, by.val = stutter_replace_val)
stutterplot <- addColor(set1_stutter, kit = "ESX17") %>% sortMarker(kit = "ESX17",
add.missing.levels = FALSE)
marks <- levels(stutterplot$Marker)[-1]
stutterplot$Marker <- factor(as.character(stutterplot$Marker), levels = marks)
compare_dat <- data.frame(Marker = ref1$Marker[-1], perc95 = (c(11.9, 4.6, 10.9,
10.7, 12.1, 12, 11.1, 10.4, 16, 11.4, 9.1, 10.1, 8.3, 14.4, 10.1, 12.8))/100)
compare_dat <- filter(compare_dat, Marker %in% stutterplot$Marker)
ggplot() + geom_point(data = stutterplot, position = position_jitter(width = 0.1),
aes(x = Allele, y = Ratio, color = as.factor(Type)), alpha = 0.7) + geom_hline(data = compare_dat,
aes(yintercept = perc95), linetype = "dotted") + facet_wrap(~Marker, ncol = 4,
scales = "free_x", drop = FALSE) + labs(x = "True Allele", y = "Stutter Ratio",
color = "Type")
```
Figure 2\.5: Stutter ratios by allele for each of the eight samples in the `set1` data, computed for one back stutter, zero forward stutter, and no overlap. Note that SR increases with allele length (e.g. D10S1248; D2S1338; D12S391\). Horizontal dotted lines represent the 95th percentile of stutter ratio values from the study done in Hansson, Gill, and Egeland ([2014](#ref-strval)).
Figure [2\.5](dnaval.html#fig:stutter1) shows the ratio of stutter for each of the eight control samples in `set1`. The horizontal dotted lines show the 95\\(^{th}\\) percentile of the stutter ratio values computed in the same way from 220 samples in Hansson, Gill, and Egeland ([2014](#ref-strval)). There are a few stutter values above the dotted line, but overall the values correspond to what we expect to happen in a sample with only one contributor. Unusual values are shown in Table [2\.2](dnaval.html#tab:stuthigh).
Table 2\.2: Stutter peaks larger than the 95\\(^{th}\\) percentile of peak values for the study in Hansson, Gill, and Egeland ([2014](#ref-strval)).
| Sample.Name | Marker | Allele | HeightA | Stutter | HeightS | Ratio | Type | 95th perc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PC1 | D18S51 | 18 | 1617 | 17 | 203 | 0\.126 | \-1 | 0\.107 |
| PC3 | D18S51 | 18 | 1681 | 17 | 195 | 0\.116 | \-1 | 0\.107 |
| PC4 | D2S1338 | 25 | 3133 | 24 | 352 | 0\.112 | \-1 | 0\.111 |
| PC5 | D12S391 | 23 | 4378 | 22 | 640 | 0\.146 | \-1 | 0\.144 |
| PC6 | D2S1338 | 25 | 2337 | 24 | 261 | 0\.112 | \-1 | 0\.111 |
| PC6 | vWA | 19 | 1571 | 18 | 196 | 0\.125 | \-1 | 0\.114 |
### 2\.5\.3 Check heterozygote balance (intra\-locus balance)
Computing the heterozygote peak balance (Hb) is most important for analyzing samples with two or more contributors. We calculate Hb values for the eight repeated samples in `set1` below using Equation 3 from Hansson, Gill, and Egeland ([2014](#ref-strval)) to compute the ratio.
```
# checkSubset(data = set3, ref = ref3)
set1_hb <- calculateHb(data = set1.slim, ref = ref1.slim, hb = 3, kit = "ESX17",
sex.rm = TRUE, qs.rm = TRUE, ignore.case = TRUE)
hbplot <- addColor(set1_hb, kit = "ESX17") %>% sortMarker(kit = "ESX17", add.missing.levels = FALSE)
hbplot$Marker <- factor(as.character(hbplot$Marker), levels = marks)
ggplot(data = hbplot) + geom_point(aes(x = MPH, y = Hb, color = Dye), position = position_jitter(width = 0.1)) +
geom_hline(yintercept = 0.6, linetype = "dotted") + facet_wrap(~Marker,
nrow = 4, scales = "free_x", drop = FALSE) + scale_color_manual(values = c("blue",
"green", "black", "red")) + labs(x = "Mean Peak Height (RFU)", y = "Ratio",
color = "Dye") + guides(color = guide_legend(nrow = 1)) + theme(axis.text.x = element_text(size = rel(0.8)),
legend.position = "top")
```
Figure 2\.6: Hb ratio values for the eight samples in `set1`. Most ratios are above the 0\.6 threshold.
Figure [2\.6](dnaval.html#fig:hb) shows the Hb values for the eight samples in `set1`. The balance ratio is typically no less than 0\.6 according to Gill, Sparkes, and Kimpton ([1997](#ref-gill97)), but there are a few exceptions to this rule in the `set1` sample, shown in Table [2\.3](dnaval.html#tab:smhb)
Table 2\.3: Observations in the `set1` data which have Hb value less than 0\.6\.
| Sample.Name | Marker | Dye | Delta | Small | Large | MPH | Hb |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PC1 | D12S391 | R | 5\.0 | 2323 | 4017 | 3170\.0 | 0\.578 |
| PC3 | SE33 | R | 1\.0 | 4017 | 6761 | 5389\.0 | 0\.594 |
| PC6 | D10S1248 | G | 2\.0 | 1760 | 3071 | 2415\.5 | 0\.573 |
| PC7 | D21S11 | B | 2\.2 | 1487 | 2678 | 2082\.5 | 0\.555 |
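These exceptions can be listed directly from the `hbplot` data frame constructed above, filtering on the `Hb` column plotted in Figure [2\.6](dnaval.html#fig:hb):
```
# list heterozygote balance values below the 0.6 guideline
hbplot %>% filter(Hb < 0.6) %>% arrange(Hb)
```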
### 2\.5\.4 Check inter\-locus balance
Inter\-locus balance (Lb) is a measure of peak balances across loci (Hansson, Gill, and Egeland [2014](#ref-strval)). The total height of the peaks in all loci should be spread evenly across each individual locus in a sample. In the `set1` data, 17 loci are measured, thus each individual locus balance should be about \\(\\frac{1}{17}^{th}\\) of the total height of all peaks in RFUs.
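Under the proportional method (our reading of `option = "prop"` below), the balance at a locus is its total peak height (TPH) divided by the summed peak heights over all loci in the sample, so the per\-locus values within a sample sum to one and an ideal profile sits at \\(1/17 \\approx 0\.059\\) at every locus.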
```
set1_lb <- calculateLb(data = set1.slim, ref = ref1.slim, kit = "ESX17", option = "prop",
by.dye = FALSE, ol.rm = TRUE, sex.rm = FALSE, qs.rm = TRUE, ignore.case = TRUE,
na = 0)
set1_height <- calculateHeight(data = set1.slim, ref = ref1.slim, kit = "ESX17",
sex.rm = FALSE, qs.rm = TRUE, na.replace = 0)
set1_lb <- set1_lb %>% left_join(set1_height %>% select(Sample.Name:Marker,
Dye, TPH, H, Expected, Proportion) %>% distinct(), by = c("Sample.Name",
"Marker", "Dye", TPPH = "TPH"))
set1_lb <- sortMarker(set1_lb, kit = "ESX17", add.missing.levels = TRUE)
ggplot(set1_lb) + geom_boxplot(aes(x = Marker, y = Lb, color = Dye), alpha = 0.7) +
scale_color_manual(values = c("blue", "green", "black", "red")) + geom_hline(yintercept = 1/17,
linetype = "dotted") + theme(legend.position = "top", axis.text.x = element_text(size = rel(0.8),
angle = 270, hjust = 0, vjust = 0.5)) + labs(y = "Lb (proportional method)")
```
Figure 2\.7: Inter\-locus balance for the eight PC samples. At each locus, the value should be about 1/17\. The peak heights should ideally be similar in each locus.
The inter\-locus balance for this kit should ideally be about \\(\\frac{1}{17} \\approx 0\.059\\). This value is shown by the horizontal dotted line in Figure [2\.7](dnaval.html#fig:lb). However, the markers in the red dye channel have consistently higher than ideal peaks and those in the yellow channel have consistently lower than ideal peaks.
### 2\.5\.5 Check stochastic threshold
The stochastic threshold is the value of interest for determining allele drop\-out. If the surviving peak of a heterozygous genotype is above the stochastic threshold, it is unlikely that its sister allele has dropped out (Butler [2009](#ref-butler09)). Allele drop\-out occurs when an allele’s peak height falls below the limit of detection threshold (LDT). As recommended in Butler ([2009](#ref-butler09)), we use an LDT of 50\. The stochastic threshold is modeled with a logistic regression.
```
set1_do <- calculateDropout(data = set1.slim, ref = ref1.slim, threshold = 50,
method = "1", kit = "ESX17")
table(set1_do$Dropout)
```
```
##
## 0
## 264
```
In `set1` there is no dropout: these are control samples, so enough DNA is present during amplification and no stochastic effects are expected.
For a more informative dropout analysis, we use a data set better suited to the task. The data `set4` was created specifically for drop\-out analysis, and contains 32 samples from three different reference profiles. The `method = "1"` argument computes dropout with respect to the low molecular weight allele in the locus.
```
data(set4)
data(ref4)
set4_do <- calculateDropout(data = set4, ref = ref4, threshold = 50, method = "1",
kit = "ESX17")
table(set4_do$Dropout)
```
```
##
## 0 1 2
## 822 33 68
```
In the `set4` data, 33 alleles dropped out (`Dropout = 1`), and locus dropout (`Dropout = 2`) occurred in 9 samples, affecting 68 alleles. In one sample all loci dropped out, while in three samples only a single locus dropped out. The locus that dropped out most often was D22S1045 (seven samples), while D19S433 and D8S1179 each dropped out in only two samples.
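These counts can be tallied directly from `set4_do` (a sketch, assuming the `Sample.Name` and `Marker` columns returned by `calculateDropout()`):
```
# markers affected by locus dropout, counted by the number of samples involved
set4_do %>% filter(Dropout == 2) %>% distinct(Sample.Name, Marker) %>% count(Marker,
    sort = TRUE)
# how many loci dropped out in each affected sample
set4_do %>% filter(Dropout == 2) %>% distinct(Sample.Name, Marker) %>% count(Sample.Name)
```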
The probability of allele drop\-out is computed via logistic regression of the dropout score (computed with method `1`) on the height of the low molecular weight allele. The model parameters are also computed by the `calculateT()` function. This function returns the smallest threshold at which the probability of dropout is less than or equal to a set value, typically 0\.01 or 0\.05, as well as a conservative threshold, at which the risk of observing a drop\-out probability greater than the specified limit is less than the set value.
```
set4_do2 <- set4_do %>% filter(Dropout != 2) %>% rename(Dep = Method1, Exp = Height)
do_mod <- glm(Dep ~ Exp, family = binomial("logit"), data = set4_do2)
set4_ths <- calculateT(set4_do2, pred.int = 0.98)
```
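The reported threshold can be related back to the fitted model by solving the logistic equation for a dropout probability of 0\.01 (a quick check; we assume `set4_ths[1]` holds the 0\.01 threshold, as in the figure annotation below):
```
# solve coef[1] + coef[2] * T = qlogis(0.01) for T
b <- coef(do_mod)
(qlogis(0.01) - b[1])/b[2]  # should be close to set4_ths[1]
```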
Next, we compute predicted dropout probabilities \\(P(D)\\) and corresponding 95% confidence intervals and plot the results.
```
xmin <- min(set4_do2$Exp, na.rm = T)
xmax <- max(set4_do2$Exp, na.rm = T)
predRange <- data.frame(Exp = seq(xmin, xmax))
ypred <- predict(do_mod, predRange, type = "link", se.fit = TRUE)
# 95% prediction interval
ylower <- plogis(ypred$fit - qnorm(1 - 0.05/2) * ypred$se) # Lower confidence limit.
yupper <- plogis(ypred$fit + qnorm(1 - 0.05/2) * ypred$se) # Upper confidence limit.
# Calculate conservative prediction curve.
yconservative <- plogis(ypred$fit + qnorm(1 - 0.05) * ypred$se)
# Calculate y values for plot.
yplot <- plogis(ypred$fit)
# combine them into a data frame for plotting
predictionDf <- data.frame(Exp = predRange$Exp, Prob = yplot, yupper = yupper,
ylower = ylower)
# plot
th_dat <- data.frame(x = 500, y = 0.5, label = paste0("At ", round(set4_ths[1],
0), " RFUs,\nthe estimated probability\nof dropout is 0.01."))
ggplot(data = predictionDf, aes(x = Exp, y = Prob)) + geom_line() + geom_ribbon(fill = "red",
alpha = 0.4, aes(ymin = ylower, ymax = yupper)) + geom_vline(xintercept = set4_ths[1],
linetype = "dotted") + geom_text(data = th_dat, inherit.aes = FALSE, aes(x = x,
y = y, label = label), hjust = 0) + xlim(c(0, 1500)) + labs(x = "Peak Height (RFUs)",
y = "Probability of allele drop-out")
```
Figure 2\.8: Probability of dropout in `set4` for peaks from 100\-1500 RFUs. 95% confidence interval for drop\-out probability shown in red.
We can also look at a heat map of dropout for each marker by sample. All the loci in sample BC10\.11 dropped out, while most other samples have no dropout whatsoever.
```
set4_do %>% tidyr::separate(Sample.Name, into = c("num", "name", "num2")) %>%
mutate(Sample.Name = paste(name, num, ifelse(is.na(num2), "", num2), sep = ".")) %>%
ggplot(aes(x = Sample.Name, y = Marker, fill = as.factor(Dropout))) + geom_tile(color = "white") +
scale_fill_brewer(name = "Dropout", palette = "Set2", labels = c("none",
"allele", "locus")) + theme(axis.text.x = element_text(size = rel(0.8),
angle = 270, hjust = 0, vjust = 0.5), legend.position = "top")
```
Figure 2\.9: Dropout for all samples in `set4` by marker.
Chapter 3 Firearms: bullets
===========================
#### Eric Hare, Heike Hofmann
Figure 3\.1: Close\-up of a bullet under a Confocal Light Microscope in the Roy J Carver High\-resolution microscopy lab at Iowa State University. Photo by Heike Hofmann. Source: [forensicstats.org](https://forensicstats.org/wp-content/uploads/2017/01/csafe-logo-90.png)
3\.1 Introduction
-----------------
When a [bullet](glossary.html#def:bullet) is fired from a [gun barrel](glossary.html#def:gunbarrel), small imperfections in the barrel leave [striation](glossary.html#def:striations) marks on the bullet. These marks are expressed most in the area of the bullet that has the closest contact to the barrel.
These engravings are assumed to be unique to individual gun barrels, and as a result, traditional forensic science methods have employed trained forensic examiners to assess the likelihood of two bullets being fired from the same barrel (a “match”). Conventionally, this has been done using the metric Consecutively Matching Striae (CMS) (Biasotti [1959](#ref-biasotti:1959)). However, no official standards have been established to scientifically delineate a number that effectively separates matches from non\-matches. Therefore, significant work has been done, and continues to be done, in order to add scientific rigor to the bullet matching process.
The 2009 National Academy of Sciences Report (National Research Council [2009](#ref-NAS:2009)[a](#ref-NAS:2009)) may have been the “call\-to\-arms” that the field needed. This report criticized the lack of rigor in the field at the time, but also described the “path forward”. As the authors saw it, the path forward included adoption of standards. A standard format to represent the structure of bullets opened the door for much of what you’ll read about in this chapter, including opening up the formerly unknown process of bullet matching to a much wider audience, and providing the foundations for truly automated, statistical algorithms to perform the procedure.
In this chapter, we outline the new standard data format used to store three\-dimensional bullet scans. We proceed by outlining relevant R packages for the processing and analysis of these scans. Finally, we discuss ways in which to draw conclusions based on these results, and tie it all together in the form of a relevant case study.
3\.2 Data
---------
Data on both [breech face](glossary.html#def:breechface) impression and [land engraved areas](glossary.html#def:leas) are available from the [NIST Ballistics Toolmark Research Database](https://tsapps.nist.gov/NRBTD/Studies/Search) (NBTRD) in the [x3p](https://tsapps.nist.gov/NRBTD/Home/DataFormat) (XML 3\-D Surface Profile) format. The x3p format was designed to implement a standard for exchanging 3D profile data. It was adopted by the Open Forensic [Metrology](glossary.html#def:metrology) Consortium, or [OpenFMC](https://www.openfmc.org/), a group of firearm forensics researchers whose aim is to establish best practices for researchers using metrology in forensic science.
Figure [3\.2](bullets.html#fig:bullets-x3pcontain) shows an illustration of the internal structure of the x3p file format. x3p files contain an XML data file with metadata on the bullet scans, as well as binary data containing the surface topology measurements. The metadata includes information on the scanning equipment and operator, as well as information on the resolution of the scans.
Figure 3\.2: An illustration of the internal structure of the x3p file format. x3p files contain an XML data file with metadata on the bullet scans, as well as binary data containing the surface topology measurements. Source: [openGPS](https://sourceforge.net/p/open-gps/mwiki/X3p/)
The use of the x3p format has positively impacted procedures relating to forensic analysis of bullets. Because the format is an open standard, researchers on a wide range of computing platforms can access and analyze the data. Because the x3p container holds a rich set of metadata, the limitations of traditional “black box”\-type file formats are eliminated. The source, parameters, and raw data contained within each 3D scan are readily available for critical analysis and examination.
3\.3 R Package(s)
-----------------
The first R package created to read and process x3p files was `x3pr` (OpenFMC [2014](#ref-x3pr)). This package includes reading routines to read in both the data as well as the metadata of a particular bullet land. The package also has some plotting functions and a writing routine to create x3p files. A new package, `x3ptools` (Hofmann et al. [2018](#ref-x3ptools)), was created to handle some limitations in `x3pr` and expand upon the functionality. A companion package, `bulletxtrctr` (Hofmann, Vanderplas, and Krishnan [2018](#ref-bulletxtrctr)), expands upon x3ptools and provides functions to perform an automated bullet analysis routine based on the algorithms described in Hare, Hofmann, and Carriquiry ([2017](#ref-hare2017)).
The two packages [`x3ptools`](https://heike.github.io/x3ptools/) and [`bulletxtrctr`](https://heike.github.io/bulletxtrctr/) will be the focus of the remainder of this chapter.
### 3\.3\.1 x3ptools
Although `x3ptools` isn’t written specifically for the purposes of handling bullet scans, it is the package of choice to begin a bullet analysis. In fact, the package itself is generic and can handle a wide range of data types that use the x3p container format.
To begin, the package can be installed from [CRAN](https://cran.r-project.org/web/packages/x3ptools/) (stable release) or [GitHub](https://github.com/heike/x3ptools) (development version):
```
# from CRAN: install.packages('x3ptools') install development version from
# GitHub:
devtools::install_github("heike/x3ptools")
```
We load the package and use some built\-in x3p data to get a feel for the package functionality. We will work with the Center for Statistical Applications in Forensic Evidence (CSAFE) logo. In its original colored form, the logo looks like Figure [3\.3](bullets.html#fig:bullets-csafelogo).
Figure 3\.3: The CSAFE logo. Source: [CSAFE](https://forensicstats.org/).
A 3D version of this logo is available in `x3ptools`, where portions of the logo are raised and recessed. This makes for a good test case in introducing `x3ptools` and the idea behind 3D scans of objects, as we transition towards bullet analysis.
```
library(tidyverse)
library(x3ptools)
logo <- read_x3p(system.file("csafe-logo.x3p", package = "x3ptools"))
names(logo)
```
```
## [1] "header.info" "surface.matrix" "feature.info" "general.info"
## [5] "matrix.info"
```
We can see that there are five elements to the list object returned:
* **header.info** \- Provides us information on the resolution of the scan
* **surface.matrix** \- The actual surface data of the scan
* **feature.info** \- Properties of the scan itself
* **general.info** \- Information on how the data was captured
* **matrix.info** \- Some information expanding upon header.info
The two most relevant for our purposes are **header.info** and **surface.matrix**. To begin to understand this container format better, we can use the `image_x3p` function to produce a visualization of the surface, shown in Figure [3\.4](bullets.html#fig:bullets-csafelogoscan).
```
image_x3p(logo)
```
Figure 3\.4: 3D surface scan of the CSAFE logo. (Rendered image has been down\-sampled to speed up page load.)
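Before converting the scan to a data frame, it can also be useful to inspect these two components directly:
```
str(logo$header.info)  # scan resolution and grid size
dim(logo$surface.matrix)  # the height measurements, stored as a matrix
```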
We can use the function `x3p_to_df` in order to convert this structure into a standard R data frame, which will allow us to do any number of data manipulation and plotting routines. In this case, Figure [3\.5](bullets.html#fig:bullets-x3pplot) shows a simple scatter plot created with `ggplot2` of the height measurements across the surface of the bullet.
```
logo_df <- x3p_to_df(logo)
ggplot(data = logo_df, aes(x = x, y = y, color = value)) + geom_point() + scale_color_gradient(low = "white",
high = "black") + theme_bw()
```
Figure 3\.5: A simple scatterplot created with ggplot2 of the height measurements across the surface of the bullet.
A key feature of the data is that the `value` column represents the height of the pixel corresponding to the particular \\((x,y)\\) location. In this logo, we can see that the fingerprint section of the logo is raised above the background quite clearly. As we transition to operating on images of bullets, this will be important to note.
One other important feature of the package is the ability to sample. Depending on the size and resolution of a particular scan, the resulting object could be quite large. This CSAFE logo, despite being a relatively small physical size, still results in a 310,479 row data frame. Though manageable, this means that certain routines, such as producing the above scatter plot, can be quite slow.
When high resolution is not needed, we may elect to sample the data to reduce the resulting size. This can be done with the `sample_x3p` function. The function takes a parameter `m` to indicate the sampling factor to use. For example, a value of `m = 4` will sample every 4th height value from the 3D scan, as illustrated in Figure [3\.6](bullets.html#fig:bullets-samp).
```
sample_logo <- sample_x3p(logo, m = 4)
sample_logo_df <- x3p_to_df(sample_logo)
ggplot(data = sample_logo_df, aes(x = x, y = y, color = value)) + geom_point() +
scale_color_gradient(low = "white", high = "black") + theme_bw()
```
Figure 3\.6: A sampled scan of an x3p file extracted using the sample\_x3p function.
You can see the clarity of the resulting plot has noticeably declined, but the overall structure has been maintained. Depending on the application, this could be a solution for making a slow analytical process a bit faster.
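One way to see the savings is to compare the number of rows before and after sampling:
```
nrow(logo_df)  # full-resolution data frame
nrow(sample_logo_df)  # sampled with m = 4
```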
### 3\.3\.2 `bulletxtrctr`
As mentioned, we will use the `bulletxtrctr` package to process 3D surface scans of bullets. This package depends on `x3ptools` for reading and writing x3p files but otherwise focuses on statistical routines for matching bullets. The package is not yet available on CRAN, but can be installed from GitHub:
```
devtools::install_github("heike/bulletxtrctr")
```
To demonstrate the functionality of `bulletxtrctr`, we use data from the NBTRD at NIST. We download the surface scan for a bullet from the Hamby Study (Hamby, Brundage, and Thorpe [2009](#ref-hamby:2009)), using the `read_bullet` function, transform the measurements from meters to microns (`x3p_m_to_mum`), and rotate the images so that the long axis is the horizontal. Note that the object `hamby252demo` is a list object exported from `bulletxtrctr` that contains URLs to the NIST NBTRD.
```
library(randomForest)
library(bulletxtrctr)
# note: length(hamby252demo[[1]]) is 6
br1_b1 <- read_bullet(urllist = hamby252demo[[1]]) %>% # x3p_m_to_mum: converts from meters to microns
mutate(x3p = x3p %>% purrr::map(.f = x3p_m_to_mum)) %>% # rotate_x3p(angle = -90): change orientation by 90 degrees clockwise
# y_flip_x3p: flip image to conform to new ISO norm (see ??y_flip_x3p)
mutate(x3p = x3p %>% purrr::map(.f = function(x) x %>% rotate_x3p(angle = -90) %>%
y_flip_x3p()))
```
When working with lots of bullet data, it’s important to stay organized when naming objects in your R session. The name of the object we just created is `br1_b1`.
This indicates that we are looking at the first bullet (`b1`) that was fired from Barrel 1 (`br1`). A bullet is composed of a certain number of land engraved areas (LEAs), and each LEA is a separate file with a separate URL. So, the object `br1_b1` contains six observations (`nrow(br1_b1)` is 6\), one for each land engraved area; together they compose the whole bullet scan. The [rifling](glossary.html#def:rifling) of the barrel induces these land engraved areas, which are a series of alternating raised and recessed portions on the fired bullet. In addition, manufacturing defects engrave striation marks on the bullet as it travels through the gun barrel when fired (AFTE Criteria for Identification Committee [1992](#ref-afte:1992)[b](#ref-afte:1992)).
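A quick way to confirm this structure (a sketch; `read_bullet()` also returns a `source` column recording where each scan came from, which we use later in the chapter):
```
nrow(br1_b1)  # 6: one row per land engraved area
br1_b1$source  # the scan each row was read from
```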
Let’s take a quick look at the first bullet land (Figure [3\.7](bullets.html#fig:bullets-b111)).
```
image_x3p(br1_b1$x3p[[1]])
```
Figure 3\.7: Land 1 of Bullet 1 from Barrel 1 of the Hamby Study (Set 44\). Source: [NRBTD](https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/43567404-1611-4b40-ae74-a1e440e79f6a). (Rendered image has been down\-sampled to speed up page load.)
Immediately, we can clearly see the vertical striation marks. To better visualize these marks, we can extract a cross\-section from the bullet and plot it in two dimensions. To accomplish this, `bulletxtrctr` provides us with a function `x3p_crosscut_optimize` to choose the ideal location at which to do so.
```
cc_b11 <- x3p_crosscut_optimize(br1_b1$x3p[[1]])
cc_b11
```
```
## [1] 100
```
This value provides us with the location (in microns) of a horizontal line that the algorithm determines to be a good place to extract a cross\-section. The two primary criteria for determining this are:
1. The location should be close to the base of the bullet (\\(y \= 0\\)) because the striation marks are most pronounced there.
2. Cross\-sections taken near this location should be similar to this cross\-section (stability).
The `x3p_crosscut_optimize` function looks for the first cross\-section meeting these criteria, searching upwards from the base of the bullet land. With this value, we can extract and plot the cross\-section, shown in Figure [3\.8](bullets.html#fig:bullets-cc).
```
ccdata_b11 <- x3p_crosscut(br1_b1$x3p[[1]], y = cc_b11)
ggplot(data = ccdata_b11, aes(x = x, y = value)) + geom_line() + theme_bw()
```
Figure 3\.8: Cross\-section of the bullet land at the ideal cross\-section location.
Most of the scans exhibit the pattern that we see here, where there are “wedges” on the left and right side. The wedge area is called the **shoulder**, and it is the area separating the land engraved area (the curved region in the middle) from the groove (the area not scanned because it doesn’t exhibit striations). In other words, to better hone in on the striation marks along the land, we should subset this region to include only the middle curved land engraved area portion. Fortunately, `bulletxtrctr` provides us with functionality to do that automatically. First, we use the `cc_locate_grooves` function to detect the location of the grooves. This returns a list object, with one element being the two groove locations along the x axis, and the other element being the plot, given in Figure [3\.9](bullets.html#fig:bullets-grooveloc).
```
grooves_b11 <- cc_locate_grooves(ccdata_b11, return_plot = TRUE, method = "middle")
grooves_b11$plot
```
Figure 3\.9: Location of the grooves in our bullet scan, as detected by the `get_grooves` function.
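Besides the plot, the returned list also carries the two cutoff locations themselves (a sketch; `groove` is our assumed name for the non\-plot element):
```
names(grooves_b11)
grooves_b11$groove  # the two x locations (in microns) bounding the land engraved area
```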
With the grooves detected, we can now smooth out the surface using locally estimated scatter plot smoothing (LOESS) (Cleveland [1979](#ref-cleveland:1979)). Once we do so, we obtain what we call a **bullet signature**, Figure [3\.10](bullets.html#fig:bullets-loess), representing the clearest picture yet of the striation marks along the surface of the land.
```
b111_processed <- cc_get_signature(ccdata = ccdata_b11, grooves = grooves_b11,
span1 = 0.75, span2 = 0.03) %>% filter(!is.na(sig), !is.na(raw_sig))
ggplot(data = b111_processed, aes(x = x, y = sig)) + geom_line() + theme_bw()
```
Figure 3\.10: LOESS\-smoothed version of our bullet profile, called the bullet signature.
The land signature is the basic unit of analysis for feature extraction in the `bulletxtrctr` package. With multiple bullet signatures, matches can quickly and easily be made using the `sig_align` function, in conjunction with the `extract_feature` family of functions, which we will discuss later in the chapter.
3\.4 Drawing Conclusions
------------------------
We have seen the process of extracting the signature of a bullet and plotting it using R. But recall that the application of these procedures demands an answer to the question of whether this bullet was fired from the same gun barrel as another bullet. The question becomes, does this bullet signature “match” the signature of another bullet with high probability?
This answer could be derived quite seamlessly in an ideal world given a reference database of all bullets in existence that have been fired from all gun barrels. With this database, we would compute the signatures for all of them and we could then make probabilistic judgments based on the similarities of signatures fired from the same barrel versus those from different barrels. Without this database, the best we can do is to begin a large data collection process resulting in a reference database, such as the approach in the NIST NBTRD. To come to a conclusion about the source of two fired bullets, we need to quantify the similarity of two land signatures that were part of bullets fired from the same barrel. This will be the focus of the Case Study section.
One other approach to drawing conclusions is to use the generated signatures as a supplement to the manual examination by trained forensic examiners. This semi\-automated procedure maintains the valuable expertise of the examiner and provides a scientific backing to some of the conclusions made. In the cases where conclusions may differ, this can lead to either refinement of the examination procedure, or refinement of the automated algorithms described.
3\.5 Case Study
---------------
We will now walk through the process of performing a bullet match. Much of the code for this section has been adapted from the excellent `bulletxtrctr` [README](https://github.com/heike/bulletxtrctr/blob/master/README.Rmd). We take two bullets with 6 lands each for comparison. Thus, there are 36 land\-to\-land comparisons to be made, of which 6 are known matches, and 30 are known non\-matches. We begin by reading the bullets:
```
# bullet 1
urllist1 <- c("https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/cd204983-465b-4ec3-9da8-cba515a779ff",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/0e72228c-5e39-4a42-8c4e-3da41a11f32c",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/b9d6e187-2de7-44e8-9b88-c83c29a8129d",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/fda92f6a-71ba-4735-ade0-02942d14d1e9",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/8fa798b4-c5bb-40e2-acf4-d9296865e8d4",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/81e817e5-15d8-409f-b5bd-d67c525941fe")
# bullet 2
urllist2 <- c("https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/288341e0-0fdf-4b0c-bd26-b31ac8c43f72",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/c97ada55-3a35-44fd-adf3-ac27dd202522",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/8a1805d9-9d01-4427-8873-aef4a0bd323a",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/a116e448-18e1-4500-859c-38a5f5cc38fd",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/0b7182d3-1275-456e-a9b4-ae378105e4af",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/86934fcd-7317-4c74-86ae-f167dbc2f434")
b1 <- read_bullet(urllist = urllist1)
b2 <- read_bullet(urllist = urllist2)
```
For ease of analysis, we bind the bullets in a single data frame, and identify them using numeric values inside the data frame. We also indicate the six different lands.
```
b1$bullet <- 1
b2$bullet <- 2
b1$land <- 1:6
b2$land <- 1:6
bullets <- rbind(b1, b2)
```
As before, we want to rotate the bullets such that the long axis is along the horizontal, as the functions within `bulletxtrctr` assume this format.
```
bullets <- bullets %>% mutate(x3p = x3p %>% purrr::map(.f = x3p_m_to_mum)) %>%
mutate(x3p = x3p %>% purrr::map(.f = function(x) x %>% rotate_x3p(angle = -90) %>%
y_flip_x3p()))
```
We extract the ideal cross\-sections from all 12 bullet lands, which are shown in Figure [3\.11](bullets.html#fig:bullets-cscrosscut). In each land, we see the standard curved pattern, with well\-defined, pronounced shoulders indicating the cutoff locations for extracting the land.
```
bullets <- bullets %>% mutate(crosscut = x3p %>% purrr::map_dbl(.f = x3p_crosscut_optimize))
bullets <- bullets %>% mutate(ccdata = purrr::map2(.x = x3p, .y = crosscut,
.f = x3p_crosscut))
crosscuts <- bullets %>% tidyr::unnest(ccdata)
ggplot(data = crosscuts, aes(x = x, y = value)) + geom_line() + facet_grid(bullet ~
land, labeller = "label_both") + theme_bw() + theme(axis.text.x = element_text(angle = 30,
hjust = 1, vjust = 1, size = rel(0.9)))
```
Figure 3\.11: Ideal cross\-sections for all 12 bullet lands.
Next, with each of these profiles, we need to detect grooves to extract the bullet signature between them. In Figure [3\.12](bullets.html#fig:bullets-csgrooves), we can see that the groove locations of the 12 bullet lands appear to be detected well, such that the middle portion between the two vertical blue lines represents a good sample of the land\-engraved area.
```
bullets <- bullets %>% mutate(grooves = ccdata %>% purrr::map(.f = cc_locate_grooves,
method = "middle", adjust = 30, return_plot = TRUE))
do.call(gridExtra::grid.arrange, lapply(bullets$grooves, `[[`, 2))
```
Figure 3\.12: Groove locations of each of the 12 bullet lands.
With the groove locations detected, we proceed as before by using LOESS to smooth out the curvature of the surface and focus on the striation marks. Figure [3\.13](bullets.html#fig:bullets-cssigs) shows us the raw signatures of the 12 lands. The striation marks are much more visible now.
```
bullets <- bullets %>% mutate(sigs = purrr::map2(.x = ccdata, .y = grooves,
.f = function(x, y) {
cc_get_signature(ccdata = x, grooves = y, span1 = 0.75, span2 = 0.03)
}))
signatures <- bullets %>% select(source, sigs) %>% tidyr::unnest()
bullet_info <- bullets %>% select(source, bullet, land)
signatures %>% filter(!is.na(sig), !is.na(raw_sig)) %>% left_join(bullet_info,
by = "source") %>% ggplot(aes(x = x)) + geom_line(aes(y = raw_sig), colour = "grey70") +
geom_line(aes(y = sig), colour = "grey30") + facet_grid(bullet ~ land, labeller = "label_both") +
ylab("value") + ylim(c(-5, 5)) + theme_bw()
```
Figure 3\.13: Signatures for the 12 bullet lands. Light gray lines show the raw data, while the dark gray lines are the smoothed signatures.
Because we are working with 12 signatures, our goal is to align all pairwise comparisons between the six lands of each bullet (36 between\-bullet comparisons; the code below builds the full 12 × 12 grid, which also includes within\-bullet comparisons). Figure [3\.14](bullets.html#fig:bullets-csalign) shows the alignment of Bullet 2 Land 3 with Bullet 1 Land 2, two of the known matches. Immediately it is clear that the pattern of the signatures appears very similar between the two lands.
```
bullets$bulletland <- paste0(bullets$bullet, "-", bullets$land)
lands <- unique(bullets$bulletland)
comparisons <- data.frame(expand.grid(land1 = lands, land2 = lands), stringsAsFactors = FALSE)
comparisons <- comparisons %>% mutate(aligned = purrr::map2(.x = land1, .y = land2,
.f = function(xx, yy) {
land1 <- bullets$sigs[bullets$bulletland == xx][[1]]
land2 <- bullets$sigs[bullets$bulletland == yy][[1]]
land1$bullet <- "first-land"
land2$bullet <- "second-land"
sig_align(land1$sig, land2$sig)
}))
subset(comparisons, land1 == "2-3" & land2 == "1-2")$aligned[[1]]$lands %>%
mutate(`b2-l3` = sig1, `b1-l2` = sig2) %>% select(-sig1, -sig2) %>% tidyr::gather(sigs,
value, `b2-l3`, `b1-l2`) %>% ggplot(aes(x = x, y = value, colour = sigs)) +
geom_line() + theme_bw() + scale_color_brewer(palette = "Dark2")
```
Figure 3\.14: Alignment of two bullet lands (2\-3 \& 1\-2\)
Though the visual evidence is strong, we want to quantify the similarity. To do this, we’re going to use a number of functions which extract features from the aligned signatures of the bullets. We’ll extract the [cross\-correlation](glossary.html#def:crosscor) (`extract_feature_ccf`), the [matching striation count](glossary.html#def:matchingstria) (`bulletxtrctr:::extract_helper_feature_n_striae`), the [non\-matching striation count](glossary.html#def:nmatchingstria), and many more (`extract_feature_*`).
```
comparisons <- comparisons %>% mutate(ccf0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_ccf(x$lands)),
lag0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_lag(x$lands)),
D0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_D(x$lands)),
length0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_length(x$lands)),
overlap0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_overlap(x$lands)),
striae = aligned %>% purrr::map(.f = sig_cms_max, span = 75), cms_per_mm = purrr::map2(striae,
aligned, .f = function(s, a) {
extract_feature_cms_per_mm(s$lines, a$lands, resolution = 1.5625)
}), matches0 = striae %>% purrr::map_dbl(.f = function(s) {
bulletxtrctr:::extract_helper_feature_n_striae(s$lines, type = "peak",
match = TRUE)
}), mismatches0 = striae %>% purrr::map_dbl(.f = function(s) {
bulletxtrctr:::extract_helper_feature_n_striae(s$lines, type = "peak",
match = FALSE)
}), bulletA = gsub("([1-2])-([1-6])", "\\1", land1), bulletB = gsub("([1-2])-([1-6])",
"\\1", land2), landA = gsub("([1-2])-([1-6])", "\\2", land1), landB = gsub("([1-2])-([1-6])",
"\\2", land2))
```
We are now ready to begin matching the bullets. We’ll start by looking at Figure [3\.15](bullets.html#fig:bullets-cscompare), which aligns the two bullets by bullet land and colors each of the cells (comparisons) by the [cross\-correlation function](glossary.html#def:ccfval) (CCF) value. Encouragingly, we see a diagonal pattern in the matrix, which is to be expected given the assumption that the bullet scans were collected by rotating the bullet and are stored in rotational order. Note that in two of the four panels (top left and bottom right) each bullet is compared to itself, so the diagonal cells are self comparisons, which, as expected, exhibit the highest CCF values.
```
comparisons <- comparisons %>% mutate(features = purrr::map2(.x = aligned, .y = striae,
.f = extract_features_all, resolution = 1.5625), legacy_features = purrr::map(striae,
extract_features_all_legacy, resolution = 1.5625)) %>% tidyr::unnest(legacy_features)
comparisons %>% ggplot(aes(x = landA, y = landB, fill = ccf)) + geom_tile() +
scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
ylab("Land B") + theme(aspect.ratio = 1)
```
Figure 3\.15: Land\-to\-Land Comparison of the two bullets colored by the CCF.
We can improve upon these results by using a trained random forest, `bulletxtrctr::rtrees`, which was introduced in Hare, Hofmann, and Carriquiry ([2017](#ref-hare2017)) in order to assess the probability of a match between bullet lands. Figure [3\.16](bullets.html#fig:bullets-csrf) displays the random forest score, or match probability, of each of the land\-to\-land comparisons. The results are stronger than using only the CCF in this case.
```
comparisons$rfscore <- predict(bulletxtrctr::rtrees, newdata = comparisons,
type = "prob")[, 2]
comparisons %>% ggplot(aes(x = landA, y = landB, fill = rfscore)) + geom_tile() +
scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
ylab("Land B") + theme(aspect.ratio = 1)
```
Figure 3\.16: Random forest matching probabilities of all land\-to\-land comparisons.
Finally, we can visualize the accuracy of our comparisons by highlighting the cells which were in fact matches (same\-source). Figure [3\.17](bullets.html#fig:bullets-csss) shows this, indicating that for the comparison between the two bullets, a couple of the lands didn’t exhibit a high match probability. With that said, given that the other four lands exhibited a strong probability, this is strong evidence that these bullets were in fact fired from the same barrel. Methods for bullet\-to\-bullet matching using the random forest results of land\-to\-land comparisons are still in development at CSAFE. Currently, sequence average matching (SAM) from Sensofar Metrology ([2016](#ref-sensofarsam)) is used in similar problems to compare the CCF values in sequence (by rotation of the bullet), and methods in development have been using SAM as a baseline.
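The code for Figure [3\.17](bullets.html#fig:bullets-csss) is not shown here; a sketch along the following lines would produce a similar plot. The same\-source labeling is our own assumption, based on the one\-land offset between the two bullets suggested by the 2\-3 / 1\-2 match in Figure [3\.14](bullets.html#fig:bullets-csalign).
```
# sketch (not the authors' code): flag assumed same-source pairs and outline them
la <- as.numeric(comparisons$landA)
lb <- as.numeric(comparisons$landB)
comparisons$samesource <- ifelse(comparisons$bulletA == comparisons$bulletB, la ==
    lb, (la - lb)%%6 == ifelse(comparisons$bulletA == "2", 1, 5))
comparisons %>% ggplot(aes(x = landA, y = landB, fill = rfscore)) + geom_tile() +
    geom_tile(data = subset(comparisons, samesource), fill = NA, colour = "black") +
    scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
    facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
    ylab("Land B") + theme(aspect.ratio = 1)
```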
Figure 3\.17: All Land\-to\-Land Comparisons of the bullets, highlighting same\-source lands.
[
#### Eric Hare, Heike Hofmann
Figure 3\.1: Close\-up of a bullet under a Confocal Light Microscope in the Roy J Carver High\-resolution microscopy lab at Iowa State University. Photo by Heike Hofmann. Source: [forensicstats.org](https://forensicstats.org/wp-content/uploads/2017/01/csafe-logo-90.png)
3\.1 Introduction
-----------------
When a [bullet](glossary.html#def:bullet) is fired from a [gun barrel](glossary.html#def:gunbarrel), small imperfections in the barrel leave [striation](glossary.html#def:striations) marks on the bullet. These marks are expressed most in the area of the bullet that has the closest contact to the barrel.
These engravings are assumed to be unique to individual gun barrels, and as a result, traditional forensic science methods have employed trained forensic examiners to assess the likelihood of two bullets being fired from the same barrel (a “match”). Conventionally, this has been done using the metric Consecutively Matching Striae(CMS) (Biasotti [1959](#ref-biasotti:1959)). However, no official standards have been established to scientifically delineate a number that effectively separates matches from non\-matches. Therefore, significant work has been done, and continues to be done, in order to add scientific rigor to the bullet matching process.
The 2009 National Academy of Sciences Report (National Research Council [2009](#ref-NAS:2009)[a](#ref-NAS:2009)) may have been the “call\-to\-arms” that the field needed. This report criticized the lack of rigor in the field at the time, but also described the “path forward”. As the authors saw it, the path forward included adoption of standards. A standard format to represent the structure of bullets opened the door for much of what you’ll read about in this chapter, including opening up the formerly unknown process of bullet matching to a much wider audience, and providing the foundations for truly automated, statistical algorithms to perform the procedure.
In this chapter, we outline the new standard data format used to store three\-dimensional bullet scans. We proceed by outlying relevant R packages for the processing and analysis of these scans. Finally, we discuss ways in which to draw conclusions based on these results, and tie it all together in the form of a relevant case study.
3\.2 Data
---------
Data on both [breech face](glossary.html#def:breechface) impression and [land engraved areas](glossary.html#def:leas) are available from the [NIST Ballistics Toolmark Research Database](https://tsapps.nist.gov/NRBTD/Studies/Search) (NBTRD) in the [x3p](https://tsapps.nist.gov/NRBTD/Home/DataFormat) (XML 3\-D Surface Profile) format. The x3p format was designed to implement a standard for exchanging 3D profile data. It was adopted by the Open Forensic [Metrology](glossary.html#def:metrology) Consortium, or [OpenFMC](https://www.openfmc.org/), a group of firearm forensics researchers whose aim is to establish best practices for researchers using metrology in forensic science.
Figure [3\.2](bullets.html#fig:bullets-x3pcontain) shows an illustration of the internal structure of the x3p file format. x3p files contain an XML data file with metadata on the bullet scans, as well as binary data containing the surface topology measurements. The metadata includes information on the scanning equipment and operator, as well as information on the resolution of the scans.
Figure 3\.2: An illustration of the internal structure of the x3p file format. x3p files contain an XML data file with metadata on the bullet scans, as well as binary data containing the surface topology measurements. Source: [openGPS](https://sourceforge.net/p/open-gps/mwiki/X3p/)
The use of the x3p format has positively impacted procedures relating to forensic analysis of bullets. Because the format is an open standard, researchers on a wide range of computing platforms can access and analyze the data. Due to the x3p container holding a rich set of metadata, the limitations of traditional “black box”\-type file formats are eliminated. The source, parameters, and raw data contained within each 3D scan is readily available for critical analysis and examination.
3\.3 R Package(s)
-----------------
The first R package created to read and process x3p files was `x3pr` (OpenFMC [2014](#ref-x3pr)). This package includes reading routines to read in both the data as well as the metadata of a particular bullet land. The package also has some plotting functions and a writing routine to create x3p files. A new package, `x3ptools` (Hofmann et al. [2018](#ref-x3ptools)), was created to handle some limitations in `x3pr` and expand upon the functionality. A companion package, `bulletxtrctr` (Hofmann, Vanderplas, and Krishnan [2018](#ref-bulletxtrctr)), expands upon x3ptools and provides functions to perform an automated bullet analysis routine based on the algorithms described in Hare, Hofmann, and Carriquiry ([2017](#ref-hare2017)).
The two packages [`x3ptools`](https://heike.github.io/x3ptools/) and [`bulletxtrctr`](https://heike.github.io/bulletxtrctr/) will be the focus of the remainder of this chapter.
### 3\.3\.1 x3ptools
Although `x3ptools` isn’t written specifically for the purposes of handling bullet scans, it is the package of choice to begin a bullet analysis. In fact, the package itself is generic and can handle a wide range of data types that use the x3p container format.
To begin, the package can be installed from [CRAN](https://cran.r-project.org/web/packages/x3ptools/) (stable release) or [GitHub](https://github.com/heike/x3ptools) (development version):
```
# from CRAN: install.packages('x3ptools') install development version from
# GitHub:
devtools::install_github("heike/x3ptools")
```
We load the package and use some built\-in x3p data to get a feel for the package functionality. We will work with the Center for Statistical Applications in Forensic Evidence (CSAFE) logo. In its original colored form, the logo looks like Figure [3\.3](bullets.html#fig:bullets-csafelogo).
Figure 3\.3: The CSAFE logo. Source: [CSAFE](https://forensicstats.org/).
A 3D version of this logo is available in `x3ptools`, where portions of the logo are raised and recessed. This makes for a good test case in introducing `x3ptools` and the idea behind 3D scans of objects, as we transition towards bullet analysis.
```
library(tidyverse)
library(x3ptools)
logo <- read_x3p(system.file("csafe-logo.x3p", package = "x3ptools"))
names(logo)
```
```
## [1] "header.info" "surface.matrix" "feature.info" "general.info"
## [5] "matrix.info"
```
We can see that there are five elements to the list object returned:
* **header.info** \- Provides us information on the resolution of the scan
* **surface.matrix** \- The actual surface data of the scan
* **feature.info** \- Properties of the scan itself
* **general.info** \- Information on how the data was captured
* **matrix.info** \- Some information expanding upon header.info
The two most relevant for our purposes are **header.info** and **surface.matrix**. To begin to understand this container format better, we can use the `image_x3p` function to produce a visualization of the surface, shown in Figure [3\.4](bullets.html#fig:bullets-csafelogoscan).
```
image_x3p(logo)
```
Figure 3\.4: 3D surface scan of the CSAFE logo. (Rendered image has been down\-sampled to speed up page load.)
We can use the function `x3p_to_df` in order to convert this structure into a standard R data frame, which will allow us to do any number of data manipulation and plotting routines. In this case, Figure [3\.5](bullets.html#fig:bullets-x3pplot) shows a simple scatter plot created with `ggplot2` of the height measurements across the surface of the bullet.
```
logo_df <- x3p_to_df(logo)
ggplot(data = logo_df, aes(x = x, y = y, color = value)) + geom_point() + scale_color_gradient(low = "white",
high = "black") + theme_bw()
```
Figure 3\.5: A simple scatterplot created with ggplot2 of the height measurements across the surface of the bullet.
A key feature of the data is that the `value` column represents the height of the pixel corresponding to the particular \\((x,y)\\) location. In this logo, we can see that the fingerprint section of the logo is raised above the background quite clearly. As we transition to operating on images of bullets, this will be important to note.
One other important feature of the package is the ability to sample. Depending on the size and resolution of a particular scan, the resulting object could be quite large. This CSAFE logo, despite being a relatively small physical size, still results in a 310,479 row data frame. Though manageable, this means that certain routines, such as producing the above scatter plot, can be quite slow.
When high resolution is not needed, we may elect to sample the data to reduce the resulting size. This can be done with the `sample_x3p` function. The function takes a parameter `m` to indicate the sampling factor to use. For example, a value of `m = 4` will sample every 4th height value from the 3D scan, as illustrated in Figure [3\.6](bullets.html#fig:bullets-samp).
```
sample_logo <- sample_x3p(logo, m = 4)
sample_logo_df <- x3p_to_df(sample_logo)
ggplot(data = sample_logo_df, aes(x = x, y = y, color = value)) + geom_point() +
scale_color_gradient(low = "white", high = "black") + theme_bw()
```
Figure 3\.6: A sampled scan of an x3p file extracted using the sample\_x3p function.
You can see the clarity of the resulting plot has noticeably declined, but the overall structure has been maintained. Depending on the application, this could be a solution for making a slow analytical process a bit faster.
### 3\.3\.2 `bulletxtrctr`
As mentioned, we will use the `bulletxtrctr` package to process 3D surface scans of bullets. This package depends on `x3ptools` for reading and writing x3p files but otherwise focuses on statistical routines for matching bullets. The package is not yet available on CRAN, but can be installed from GitHub:
```
devtools::install_github("heike/bulletxtrctr")
```
To demonstrate the functionality of `bulletxtrctr`, we use data from the NBTRD at NIST. We download the surface scan for a bullet from the Hamby Study (Hamby, Brundage, and Thorpe [2009](#ref-hamby:2009)), using the `read_bullet` function, transform the measurements from meters to microns (`x3p_m_to_mum`), and rotate the images so that the long axis is the horizontal. Note that the object `hamby252demo` is a list object exported from `bulletxtrctr` that contains URLs to the NIST NBTRD.
```
library(randomForest)
library(bulletxtrctr)
# note: length(hamby252demo[[1]]) is 6
br1_b1 <- read_bullet(urllist = hamby252demo[[1]]) %>% # x3p_m_to_mum: converts from meters to microns
mutate(x3p = x3p %>% purrr::map(.f = x3p_m_to_mum)) %>% # rotate_x3p(angle = -90: change orientation by 90 degrees clockwise
# y_flip_x3p: flip image to conform to new ISO norm (see ??y_flip_x3p)
mutate(x3p = x3p %>% purrr::map(.f = function(x) x %>% rotate_x3p(angle = -90) %>%
y_flip_x3p()))
```
When working with lots of bullet data, it’s important to stay organized when naming objects in your R session. The name of the object we just created is `br1_b1`.
This indicates that we are looking at the first bullet (`b1`) that was fired from Barrel 1 (`br1`). A bullet is composed of a certain number of land engraved areas (LEAs), and each LEA is a separate file with a separate URL. So, the object `br1_b1` contains `nrow(br1_b1)` (6\) observations, one for each land engraved area, which compose the whole bullet scan. The [rifling](glossary.html#def:rifling) of the barrel induces these land engraved areas, which are a series of alternating raised and recessed portions on the fired bullet. In addition, manufacturing defects engrave striation marks on the bullet as it travels through the gun barrel when fired (AFTE Criteria for Identification Committee [1992](#ref-afte:1992)[b](#ref-afte:1992)).
Let’s take a quick look at what we see one the first bullet land (Figure [3\.7](bullets.html#fig:bullets-b111)).
```
image_x3p(br1_b1$x3p[[1]])
```
Figure 3\.7: Land 1 of Bullet 1 from Barrel 1 of the Hamby Study (Set 44\). Source: [NRBTD](https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/43567404-1611-4b40-ae74-a1e440e79f6a). (Rendered image has been down\-sampled to speed up page load.)
Immediately, we can clearly see the vertical striation marks. To better visualize these marks, we can extract a cross\-section from the bullet and plot it in two dimensions. To accomplish this, `bulletxtrctr` provides us with a function `x3p_crosscut_optimize` to choose the ideal location at which to do so.
```
cc_b11 <- x3p_crosscut_optimize(br1_b1$x3p[[1]])
cc_b11
```
```
## [1] 100
```
This value provides us with the location (in microns) of a horizontal line that the algorithm determines to be a good place to extract a cross\-section. The two primary criteria for determining this are:
1. The location should be close to the base of the bullet (\\(y \= 0\\)) because the striation marks are most pronounced there.
2. Cross\-sections taken near this location should be similar to this cross\-section (stability).
The `x3p_crosscut_optimize` function looks for the first cross\-section meeting this criteria, searching upwards from the base of the bullet land. With this value, we can extract and plot the cross\-section, shown in Figure [3\.8](bullets.html#fig:bullets-cc).
```
ccdata_b11 <- x3p_crosscut(br1_b1$x3p[[1]], y = cc_b11)
ggplot(data = ccdata_b11, aes(x = x, y = value)) + geom_line() + theme_bw()
```
Figure 3\.8: Cross\-section of the bullet land at the ideal cross\-section location.
Most of the scans exhibit the pattern that we see here, where there are “wedges” on the left and right side. The wedge area is called the **shoulder**, and it is the area separating the land engraved area (the curved region in the middle) from the groove (the area not scanned because it doesn’t exhibit striations). In other words, to better hone in on the striation marks along the land, we should subset this region to include only the middle curved land engraved area portion. Fortunately, `bulletxtrctr` provides us with functionality to automatically do that. First, we use the `cc_locate_grooves` function to detect the location of the grooves. This returns a list object, with one element being the two locations along the axis, and the lother element being the plot, given in Figure [3\.9](bullets.html#fig:bullets-grooveloc).
```
grooves_b11 <- cc_locate_grooves(ccdata_b11, return_plot = TRUE, method = "middle")
grooves_b11$plot
```
Figure 3\.9: Location of the grooves in our bullet scan, as detected by the `get_grooves` function.
With the grooves detected, we can now smooth out the surface using locally estimated scatter plot smoothing (LOESS) (Cleveland [1979](#ref-cleveland:1979)). Once we do so, we obtain what we call a **bullet signature**, Figure [3\.10](bullets.html#fig:bullets-loess), representing the clearest picture yet of the striation marks along the surface of the land.
```
b111_processed <- cc_get_signature(ccdata = ccdata_b11, grooves = grooves_b11,
span1 = 0.75, span2 = 0.03) %>% filter(!is.na(sig), !is.na(raw_sig))
ggplot(data = b111_processed, aes(x = x, y = sig)) + geom_line() + theme_bw()
```
Figure 3\.10: LOESS\-smoothed version of our bullet profile, called the bullet signature.
The land signature is the element of analysis for feature extraction out of the `bulletxtrctr` package. With multiple bullet signatures, matches can quickly and easily be made using the `sig_align` function, in conjunction with the `extract_feature` family of functions, which we will discuss later on in the chapter.
### 3\.3\.1 x3ptools
Although `x3ptools` isn’t written specifically for the purposes of handling bullet scans, it is the package of choice to begin a bullet analysis. In fact, the package itself is generic and can handle a wide range of data types that use the x3p container format.
To begin, the package can be installed from [CRAN](https://cran.r-project.org/web/packages/x3ptools/) (stable release) or [GitHub](https://github.com/heike/x3ptools) (development version):
```
# from CRAN: install.packages('x3ptools') install development version from
# GitHub:
devtools::install_github("heike/x3ptools")
```
We load the package and use some built\-in x3p data to get a feel for the package functionality. We will work with the Center for Statistical Applications in Forensic Evidence (CSAFE) logo. In its original colored form, the logo looks like Figure [3\.3](bullets.html#fig:bullets-csafelogo).
Figure 3\.3: The CSAFE logo. Source: [CSAFE](https://forensicstats.org/).
A 3D version of this logo is available in `x3ptools`, where portions of the logo are raised and recessed. This makes for a good test case in introducing `x3ptools` and the idea behind 3D scans of objects, as we transition towards bullet analysis.
```
library(tidyverse)
library(x3ptools)
logo <- read_x3p(system.file("csafe-logo.x3p", package = "x3ptools"))
names(logo)
```
```
## [1] "header.info" "surface.matrix" "feature.info" "general.info"
## [5] "matrix.info"
```
We can see that there are five elements to the list object returned:
* **header.info** \- Provides us information on the resolution of the scan
* **surface.matrix** \- The actual surface data of the scan
* **feature.info** \- Properties of the scan itself
* **general.info** \- Information on how the data was captured
* **matrix.info** \- Some information expanding upon header.info
The two most relevant for our purposes are **header.info** and **surface.matrix**. To begin to understand this container format better, we can use the `image_x3p` function to produce a visualization of the surface, shown in Figure [3\.4](bullets.html#fig:bullets-csafelogoscan).
```
image_x3p(logo)
```
Figure 3\.4: 3D surface scan of the CSAFE logo. (Rendered image has been down\-sampled to speed up page load.)
We can use the function `x3p_to_df` in order to convert this structure into a standard R data frame, which will allow us to do any number of data manipulation and plotting routines. In this case, Figure [3\.5](bullets.html#fig:bullets-x3pplot) shows a simple scatter plot created with `ggplot2` of the height measurements across the surface of the bullet.
```
logo_df <- x3p_to_df(logo)
ggplot(data = logo_df, aes(x = x, y = y, color = value)) + geom_point() + scale_color_gradient(low = "white",
high = "black") + theme_bw()
```
Figure 3\.5: A simple scatterplot created with ggplot2 of the height measurements across the surface of the bullet.
A key feature of the data is that the `value` column represents the height of the pixel corresponding to the particular \\((x,y)\\) location. In this logo, we can see that the fingerprint section of the logo is raised above the background quite clearly. As we transition to operating on images of bullets, this will be important to note.
One other important feature of the package is the ability to sample. Depending on the size and resolution of a particular scan, the resulting object could be quite large. This CSAFE logo, despite being a relatively small physical size, still results in a 310,479 row data frame. Though manageable, this means that certain routines, such as producing the above scatter plot, can be quite slow.
When high resolution is not needed, we may elect to sample the data to reduce the resulting size. This can be done with the `sample_x3p` function. The function takes a parameter `m` to indicate the sampling factor to use. For example, a value of `m = 4` will sample every 4th height value from the 3D scan, as illustrated in Figure [3\.6](bullets.html#fig:bullets-samp).
```
sample_logo <- sample_x3p(logo, m = 4)
sample_logo_df <- x3p_to_df(sample_logo)
ggplot(data = sample_logo_df, aes(x = x, y = y, color = value)) + geom_point() +
scale_color_gradient(low = "white", high = "black") + theme_bw()
```
Figure 3\.6: A sampled scan of an x3p file extracted using the sample\_x3p function.
You can see the clarity of the resulting plot has noticeably declined, but the overall structure has been maintained. Depending on the application, this could be a solution for making a slow analytical process a bit faster.
### 3\.3\.2 `bulletxtrctr`
As mentioned, we will use the `bulletxtrctr` package to process 3D surface scans of bullets. This package depends on `x3ptools` for reading and writing x3p files but otherwise focuses on statistical routines for matching bullets. The package is not yet available on CRAN, but can be installed from GitHub:
```
devtools::install_github("heike/bulletxtrctr")
```
To demonstrate the functionality of `bulletxtrctr`, we use data from the NBTRD at NIST. We download the surface scan for a bullet from the Hamby Study (Hamby, Brundage, and Thorpe [2009](#ref-hamby:2009)), using the `read_bullet` function, transform the measurements from meters to microns (`x3p_m_to_mum`), and rotate the images so that the long axis is the horizontal. Note that the object `hamby252demo` is a list object exported from `bulletxtrctr` that contains URLs to the NIST NBTRD.
```
library(randomForest)
library(bulletxtrctr)
# note: length(hamby252demo[[1]]) is 6
br1_b1 <- read_bullet(urllist = hamby252demo[[1]]) %>%
  # x3p_m_to_mum: converts measurements from meters to microns
  mutate(x3p = x3p %>% purrr::map(.f = x3p_m_to_mum)) %>%
  # rotate_x3p(angle = -90): change orientation by 90 degrees clockwise
  # y_flip_x3p: flip image to conform to new ISO norm (see ??y_flip_x3p)
  mutate(x3p = x3p %>% purrr::map(.f = function(x) x %>% rotate_x3p(angle = -90) %>%
    y_flip_x3p()))
```
When working with lots of bullet data, it’s important to stay organized when naming objects in your R session. The name of the object we just created is `br1_b1`.
This indicates that we are looking at the first bullet (`b1`) that was fired from Barrel 1 (`br1`). A bullet is composed of a certain number of land engraved areas (LEAs), and each LEA is a separate file with a separate URL. So, the object `br1_b1` contains `nrow(br1_b1)` (6\) observations, one for each land engraved area, which compose the whole bullet scan. The [rifling](glossary.html#def:rifling) of the barrel induces these land engraved areas, which are a series of alternating raised and recessed portions on the fired bullet. In addition, manufacturing defects engrave striation marks on the bullet as it travels through the gun barrel when fired (AFTE Criteria for Identification Committee [1992](#ref-afte:1992)[b](#ref-afte:1992)).
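A quick look at the object confirms this structure (a minimal check; the `source` column is part of the `read_bullet` output used again later in the chapter):
```
nrow(br1_b1)   # 6 rows, one per land engraved area
br1_b1$source  # the source (URL or file path) of each individual land scan
```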
Let’s take a quick look at what we see on the first bullet land (Figure [3\.7](bullets.html#fig:bullets-b111)).
```
image_x3p(br1_b1$x3p[[1]])
```
Figure 3\.7: Land 1 of Bullet 1 from Barrel 1 of the Hamby Study (Set 44\). Source: [NRBTD](https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/43567404-1611-4b40-ae74-a1e440e79f6a). (Rendered image has been down\-sampled to speed up page load.)
Immediately, we can clearly see the vertical striation marks. To better visualize these marks, we can extract a cross\-section from the bullet land and plot it in two dimensions. To help with this, `bulletxtrctr` provides the function `x3p_crosscut_optimize`, which chooses an ideal location for extracting the cross\-section.
```
cc_b11 <- x3p_crosscut_optimize(br1_b1$x3p[[1]])
cc_b11
```
```
## [1] 100
```
This value provides us with the location (in microns) of a horizontal line that the algorithm determines to be a good place to extract a cross\-section. The two primary criteria for determining this are:
1. The location should be close to the base of the bullet (\\(y \= 0\\)) because the striation marks are most pronounced there.
2. Cross\-sections taken near this location should be similar to this cross\-section (stability).
The `x3p_crosscut_optimize` function looks for the first cross\-section meeting these criteria, searching upwards from the base of the bullet land. With this value, we can extract and plot the cross\-section, shown in Figure [3\.8](bullets.html#fig:bullets-cc).
```
ccdata_b11 <- x3p_crosscut(br1_b1$x3p[[1]], y = cc_b11)
ggplot(data = ccdata_b11, aes(x = x, y = value)) + geom_line() + theme_bw()
```
Figure 3\.8: Cross\-section of the bullet land at the ideal cross\-section location.
Most of the scans exhibit the pattern that we see here, where there are “wedges” on the left and right side. The wedge area is called the **shoulder**, and it is the area separating the land engraved area (the curved region in the middle) from the groove (the area not scanned because it doesn’t exhibit striations). In other words, to better home in on the striation marks along the land, we should subset this region to include only the middle curved land engraved area. Fortunately, `bulletxtrctr` provides functionality to do this automatically. First, we use the `cc_locate_grooves` function to detect the location of the grooves. This returns a list with two elements: the two groove locations along the x\-axis, and the plot shown in Figure [3\.9](bullets.html#fig:bullets-grooveloc).
```
grooves_b11 <- cc_locate_grooves(ccdata_b11, return_plot = TRUE, method = "middle")
grooves_b11$plot
```
Figure 3\.9: Location of the grooves in our bullet scan, as detected by the `cc_locate_grooves` function.
With the grooves detected, we can now smooth out the surface using locally estimated scatter plot smoothing (LOESS) (Cleveland [1979](#ref-cleveland:1979)). Once we do so, we obtain what we call a **bullet signature**, Figure [3\.10](bullets.html#fig:bullets-loess), representing the clearest picture yet of the striation marks along the surface of the land.
```
b111_processed <- cc_get_signature(ccdata = ccdata_b11, grooves = grooves_b11,
span1 = 0.75, span2 = 0.03) %>% filter(!is.na(sig), !is.na(raw_sig))
ggplot(data = b111_processed, aes(x = x, y = sig)) + geom_line() + theme_bw()
```
Figure 3\.10: LOESS\-smoothed version of our bullet profile, called the bullet signature.
The land signature is the basic unit of analysis for feature extraction in the `bulletxtrctr` package. With multiple land signatures in hand, comparisons can be made quickly using the `sig_align` function, in conjunction with the `extract_feature` family of functions, which we discuss later in the chapter; a small sketch of this step follows.
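As a preview, here is a minimal sketch of that step, assuming a second processed land signature `b112_processed` (a hypothetical object obtained from another land in the same way as `b111_processed` above):
```
# align the two land signatures and compute their cross-correlation
aligned_lands <- sig_align(b111_processed$sig, b112_processed$sig)
extract_feature_ccf(aligned_lands$lands)
```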
3\.4 Drawing Conclusions
------------------------
We have seen the process of extracting the signature of a bullet and plotting it using R. But recall that the application of these procedures demands an answer to the question of whether this bullet was fired from the same gun barrel as another bullet. The question becomes, does this bullet signature “match” the signature of another bullet with high probability?
This answer could be derived quite seamlessly in an ideal world given a reference database of all bullets in existence that have been fired from all gun barrels. With this database, we would compute the signatures for all of them and we could then make probabilistic judgments based on the similarities of signatures fired from the same barrel versus those from different barrels. Without this database, the best we can do is to begin a large data collection process resulting in a reference database, such as the approach in the NIST NBTRD. To come to a conclusion about the source of two fired bullets, we need to quantify the similarity of two land signatures that were part of bullets fired from the same barrel. This will be the focus of the Case Study section.
One other approach to drawing conclusions is to use the generated signatures as a supplement to the manual examination by trained forensic examiners. This semi\-automated procedure maintains the valuable expertise of the examiner and provides a scientific backing to some of the conclusions made. In the cases where conclusions may differ, this can lead to either refinement of the examination procedure, or refinement of the automated algorithms described.
3\.5 Case Study
---------------
We will now walk through the process of performing a bullet match. Much of the code for this section has been adapted from the excellent `bulletxtrctr` [README](https://github.com/heike/bulletxtrctr/blob/master/README.Rmd). We take two bullets with 6 lands each for comparison. Thus, there are 36 land\-to\-land comparisons to be made, of which 6 are known matches, and 30 are known non\-matches. We begin by reading the bullets:
```
# bullet 1
urllist1 <- c("https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/cd204983-465b-4ec3-9da8-cba515a779ff",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/0e72228c-5e39-4a42-8c4e-3da41a11f32c",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/b9d6e187-2de7-44e8-9b88-c83c29a8129d",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/fda92f6a-71ba-4735-ade0-02942d14d1e9",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/8fa798b4-c5bb-40e2-acf4-d9296865e8d4",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/81e817e5-15d8-409f-b5bd-d67c525941fe")
# bullet 2
urllist2 <- c("https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/288341e0-0fdf-4b0c-bd26-b31ac8c43f72",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/c97ada55-3a35-44fd-adf3-ac27dd202522",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/8a1805d9-9d01-4427-8873-aef4a0bd323a",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/a116e448-18e1-4500-859c-38a5f5cc38fd",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/0b7182d3-1275-456e-a9b4-ae378105e4af",
"https://tsapps.nist.gov/NRBTD/Studies/BulletMeasurement/DownloadMeasurement/86934fcd-7317-4c74-86ae-f167dbc2f434")
b1 <- read_bullet(urllist = urllist1)
b2 <- read_bullet(urllist = urllist2)
```
For ease of analysis, we bind the bullets into a single data frame and identify them using numeric values inside the data frame. We also label the six lands of each bullet.
```
b1$bullet <- 1
b2$bullet <- 2
b1$land <- 1:6
b2$land <- 1:6
bullets <- rbind(b1, b2)
```
As before, we convert the measurements to microns and rotate the bullets so that the long axis is horizontal, as the functions within `bulletxtrctr` assume this format.
```
bullets <- bullets %>% mutate(x3p = x3p %>% purrr::map(.f = x3p_m_to_mum)) %>%
mutate(x3p = x3p %>% purrr::map(.f = function(x) x %>% rotate_x3p(angle = -90) %>%
y_flip_x3p()))
```
We extract the ideal cross\-sections from all 12 bullet lands, which are shown in Figure [3\.11](bullets.html#fig:bullets-cscrosscut). In each land, we see the standard curved pattern, with well\-defined, pronounced shoulders indicating the cutoff locations for extracting the land engraved area.
```
bullets <- bullets %>% mutate(crosscut = x3p %>% purrr::map_dbl(.f = x3p_crosscut_optimize))
bullets <- bullets %>% mutate(ccdata = purrr::map2(.x = x3p, .y = crosscut,
.f = x3p_crosscut))
crosscuts <- bullets %>% tidyr::unnest(ccdata)
ggplot(data = crosscuts, aes(x = x, y = value)) + geom_line() + facet_grid(bullet ~
land, labeller = "label_both") + theme_bw() + theme(axis.text.x = element_text(angle = 30,
hjust = 1, vjust = 1, size = rel(0.9)))
```
Figure 3\.11: Ideal cross\-sections for all 12 bullet lands.
Next, with each of these profiles, we need to detect grooves to extract the bullet signature between them. In Figure [3\.12](bullets.html#fig:bullets-csgrooves), we can see that the groove locations of the 12 bullet lands appear to be detected well, such that the middle portion between the two vertical blue lines represents a good sample of the land\-engraved area.
```
bullets <- bullets %>% mutate(grooves = ccdata %>% purrr::map(.f = cc_locate_grooves,
method = "middle", adjust = 30, return_plot = TRUE))
do.call(gridExtra::grid.arrange, lapply(bullets$grooves, `[[`, 2))
```
Figure 3\.12: Groove locations of each of the 12 bullet lands.
With the groove locations detected, we proceed as before by using LOESS to smooth out the curvature of the surface and focus on the striation marks. Figure [3\.13](bullets.html#fig:bullets-cssigs) shows us the raw signatures of the 12 lands. The striation marks are much more visible now.
```
bullets <- bullets %>% mutate(sigs = purrr::map2(.x = ccdata, .y = grooves,
.f = function(x, y) {
cc_get_signature(ccdata = x, grooves = y, span1 = 0.75, span2 = 0.03)
}))
signatures <- bullets %>% select(source, sigs) %>% tidyr::unnest()
bullet_info <- bullets %>% select(source, bullet, land)
signatures %>% filter(!is.na(sig), !is.na(raw_sig)) %>% left_join(bullet_info,
by = "source") %>% ggplot(aes(x = x)) + geom_line(aes(y = raw_sig), colour = "grey70") +
geom_line(aes(y = sig), colour = "grey30") + facet_grid(bullet ~ land, labeller = "label_both") +
ylab("value") + ylim(c(-5, 5)) + theme_bw()
```
Figure 3\.13: Signatures for the 12 bullet lands. Light gray lines show the raw data, while the dark gray lines are the smoothed signatures.
Because we are working with 12 signatures, our goal is to align the signatures for every pairwise land\-to\-land comparison; between the six lands of one bullet and the six lands of the other there are 36 such comparisons (the code below also includes within\-bullet comparisons). Figure [3\.14](bullets.html#fig:bullets-csalign) shows the alignment of Bullet 2 Land 3 with Bullet 1 Land 2, a known matching pair. Immediately it is clear that the two signatures follow a very similar pattern.
```
bullets$bulletland <- paste0(bullets$bullet, "-", bullets$land)
lands <- unique(bullets$bulletland)
comparisons <- data.frame(expand.grid(land1 = lands, land2 = lands), stringsAsFactors = FALSE)
comparisons <- comparisons %>% mutate(aligned = purrr::map2(.x = land1, .y = land2,
.f = function(xx, yy) {
land1 <- bullets$sigs[bullets$bulletland == xx][[1]]
land2 <- bullets$sigs[bullets$bulletland == yy][[1]]
land1$bullet <- "first-land"
land2$bullet <- "second-land"
sig_align(land1$sig, land2$sig)
}))
subset(comparisons, land1 == "2-3" & land2 == "1-2")$aligned[[1]]$lands %>%
mutate(`b2-l3` = sig1, `b1-l2` = sig2) %>% select(-sig1, -sig2) %>% tidyr::gather(sigs,
value, `b2-l3`, `b1-l2`) %>% ggplot(aes(x = x, y = value, colour = sigs)) +
geom_line() + theme_bw() + scale_color_brewer(palette = "Dark2")
```
Figure 3\.14: Alignment of two bullet lands (2\-3 \& 1\-2\)
Though the visual evidence is strong, we want to quantify the similarity. To do this, we’re going to use a number of functions which extract features from the aligned signatures of the bullets. We’ll extract the [cross\-correlation](glossary.html#def:crosscor) (`extract_feature_ccf`), the [matching striation count](glossary.html#def:matchingstria) (`bulletxtrctr:::extract_helper_feature_n_striae`), the [non\-matching striation count](glossary.html#def:nmatchingstria), and many more (`extract_feature_*`).
```
comparisons <- comparisons %>% mutate(ccf0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_ccf(x$lands)),
lag0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_lag(x$lands)),
D0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_D(x$lands)),
length0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_length(x$lands)),
overlap0 = aligned %>% purrr::map_dbl(.f = function(x) extract_feature_overlap(x$lands)),
striae = aligned %>% purrr::map(.f = sig_cms_max, span = 75), cms_per_mm = purrr::map2(striae,
aligned, .f = function(s, a) {
extract_feature_cms_per_mm(s$lines, a$lands, resolution = 1.5625)
}), matches0 = striae %>% purrr::map_dbl(.f = function(s) {
bulletxtrctr:::extract_helper_feature_n_striae(s$lines, type = "peak",
match = TRUE)
}), mismatches0 = striae %>% purrr::map_dbl(.f = function(s) {
bulletxtrctr:::extract_helper_feature_n_striae(s$lines, type = "peak",
match = FALSE)
}), bulletA = gsub("([1-2])-([1-6])", "\\1", land1), bulletB = gsub("([1-2])-([1-6])",
"\\1", land2), landA = gsub("([1-2])-([1-6])", "\\2", land1), landB = gsub("([1-2])-([1-6])",
"\\2", land2))
```
We are now ready to begin matching the bullets. We’ll start by looking at Figure [3\.15](bullets.html#fig:bullets-cscompare), which arranges the comparisons by bullet land and colors each cell by the [cross\-correlation function](glossary.html#def:ccfval) (CCF) value. Encouragingly, we see a diagonal pattern in the matrix, which is to be expected given that the bullet scans were collected by rotating the bullet and are stored in rotational order. Note that two of the four panels (top left and bottom right) compare each bullet with itself; the diagonal cells of these panels compare each land to itself and, as expected, exhibit the highest CCF values.
```
comparisons <- comparisons %>% mutate(features = purrr::map2(.x = aligned, .y = striae,
.f = extract_features_all, resolution = 1.5625), legacy_features = purrr::map(striae,
extract_features_all_legacy, resolution = 1.5625)) %>% tidyr::unnest(legacy_features)
comparisons %>% ggplot(aes(x = landA, y = landB, fill = ccf)) + geom_tile() +
scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
ylab("Land B") + theme(aspect.ratio = 1)
```
Figure 3\.15: Land\-to\-Land Comparison of the two bullets colored by the CCF.
We can improve upon these results by using a trained random forest, `bulletxtrctr::rtrees`, which was introduced in Hare, Hofmann, and Carriquiry ([2017](#ref-hare2017)) in order to assess the probability of a match between bullet lands. Figure [3\.16](bullets.html#fig:bullets-csrf) displays the random forest score, or match probability, of each of the land\-to\-land comparisons. The results are stronger than using only the CCF in this case.
```
comparisons$rfscore <- predict(bulletxtrctr::rtrees, newdata = comparisons,
type = "prob")[, 2]
comparisons %>% ggplot(aes(x = landA, y = landB, fill = rfscore)) + geom_tile() +
scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
ylab("Land B") + theme(aspect.ratio = 1)
```
Figure 3\.16: Random forest matching probabilities of all land\-to\-land comparisons.
Finally, we can visualize the accuracy of our comparisons by highlighting the cells that are in fact matches (same\-source). Figure [3\.17](bullets.html#fig:bullets-csss) shows this, indicating that for the comparison between the two bullets, a couple of the lands didn’t exhibit a high match probability. With that said, given that the other four lands exhibited strong match probabilities, this is strong evidence that these bullets were in fact fired from the same barrel. Methods for bullet\-to\-bullet matching using the random forest results of land\-to\-land comparisons are still in development at CSAFE. Currently, sequence average matching (SAM) from Sensofar Metrology ([2016](#ref-sensofarsam)) is used in similar problems to compare the CCF values in sequence (by rotation of the bullet), and methods in development have been using SAM as a baseline.
Figure 3\.17: All Land\-to\-Land Comparisons of the bullets, highlighting same\-source lands.
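The code used to produce Figure 3\.17 is not shown here. A plot along these lines could be built by overlaying the ground truth on the random forest tile plot; the sketch below assumes a logical column `samesource`, recording the known land\-to\-land pairings, has been added to `comparisons`:
```
comparisons %>% ggplot(aes(x = landA, y = landB, fill = rfscore)) + geom_tile() +
  # mark the known same-source cells with a star
  geom_point(data = subset(comparisons, samesource), shape = 8) +
  scale_fill_gradient2(low = "grey80", high = "darkorange", midpoint = 0.5) +
  facet_grid(bulletB ~ bulletA, labeller = "label_both") + xlab("Land A") +
  ylab("Land B") + theme(aspect.ratio = 1)
```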
Chapter 4 Firearms: casings
===========================
#### *Xiao Hui Tai*
4\.1 Introduction
-----------------
Marks are left on [cartridge cases](glossary.html#def:cartridges) due to the firing process of a gun, in a similar way that marks are left on bullets. In the case of cartridge cases, there are at least two types of marks that are of interest. First, the [firing pin](glossary.html#def:firingpin) hits the [primer](#def:primer) at the base of the cartridge, leaving a firing pin impression. The subsequent explosion (which launches the bullet) also causes the cartridge case to be pressed against the breech block of the gun, leaving impressed marks known as [breechface](glossary.html#def:breechface) marks. Both these types of marks are thought to [individualize](glossary.html#def:individualize) a gun, hence law enforcement officers frequently collect cartridge cases from crime scenes, in hopes of connecting these to retrieved guns, or connecting crimes where the same weapon was used.
In current practice, retrieved cartridge cases are entered into a national database called the National Integrated Ballistics Information Network ([NIBIN](https://www.atf.gov/firearms/national-integrated-ballistic-information-network-nibin)), through a computer\-based platform which was developed and is maintained by [Ultra Electronics Forensic Technology Inc.](https://www.ultra-forensictechnology.com/en/) (FTI). This platform captures an image of the “new” cartridge case and runs a [proprietary](glossary.html#def:proprietary) search algorithm, returning a list of top ranked potential matches from the database. Firearms examiners then examine this list and the associated images, to make a judgment about which potential matches warrant further investigation. The physical cartridge cases associated with these images are then located and examined under a comparison microscope. The firearms examiner decides if there are any matches, based on whether there is “sufficient agreement” between the marks (AFTE Criteria for Identification Committee [1992](#ref-AFTE1992)[a](#ref-AFTE1992)), and may bring this evidence to court.
There has been much public criticism in recent years about the current system. For example, PCAST (PCAST [2016](#ref-PCAST2016)) expressed concern that there had been insufficient studies establishing the reliability of conclusions made by examiners, and the associated error rates had not been adequately estimated. They suggested two directions for the path forward. The first is to “continue to improve firearms analysis as a subjective method,” and the second is to “convert firearms analysis from a subjective method to an objective method,” through the use of automated methods and image\-analysis algorithms (PCAST [2016](#ref-PCAST2016)).
There have been efforts by various groups, both commercial and academic, in line with this second recommendation. A full review is out of the scope of the current text, but we refer the interested reader to Roth et al. ([2015](#ref-Roth2015)), Geradts et al. ([2001](#ref-Geradts2001)), Thumwarin ([2008](#ref-Thumwarin2008)), Riva and Champod ([2014](#ref-Riva2014)), Vorburger et al. ([2007](#ref-Vorburger2007)), Song ([2013](#ref-Song2013)), and others. One point to note is that as far as we know, none of these methods are [open\-source](glossary.html#def:openss). We have developed methodology to process and compare cartridge cases in a fully automatic, open\-source manner, and in this chapter, we describe R packages to accomplish these tasks.
4\.2 Data
---------
NIST maintains a Ballistics Toolmark Research Database (<https://tsapps.nist.gov/NRBTD>), an open\-access research database of bullet and cartridge case toolmark data. The database contains images from test fires originating from studies conducted by various groups in the firearm and toolmark community. These cartridge cases were originally collected for different purposes, for example the Laura Lightstone study investigated whether firearms examiners were able to differentiate cartridge cases from consecutively manufactured pistol slides (Lightstone [2010](#ref-Lightstone2010)). The majority of available data are cartridge cases that were sent to NIST for imaging, but the website also allows users to upload their own data in a standardized format.
There are a total of 2,305 images (as of 3/4/2019\), and among these are data sets involving consecutively manufactured [pistol slides](glossary.html#def:slide), a large number of firings (termed persistence studies because they investigate the persistence of marks), as well as different makes and models of guns and ammunition. Gun manufacturers in the database include Glock, Hi\-Point, Ruger, Sig Sauer, and Smith \& Wesson, and ammunition brands include CCI, Federal, PMC, Remington, Speer, Wolf and Winchester.
Measurements are primarily made using a Leica FS M 2D reflectance microscope, and a Nanofocus uSurf disc scanning confocal microscope. The former captures photo images while the latter captures 3D topographies. Detailed metadata are available for each of these images, for example for photo images, the magnification was 2X with a lateral resolution of \\(2\.53 \\mu m\\), producing \\(2592 \\times 1944\\) pixel, 256\-grayscale PNG images. For 3D, various magnifications were used, for example an objective of 10X results in a lateral resolution of \\(3\.125 \\mu m\\), and images that are around \\(1200 \\times 1200\\). The 3D data are in x3p format, and more information about this file format can be found in Chapter [3](bullets.html#bullets).
Examples of images are in Section [4\.5](casings.html#casings-caseStudy).
4\.3 R Package(s)
-----------------
The goal of the analysis is to derive a measure of similarity between a pair of cartridge case images. There are a few steps involved in such an analysis. Broadly, we first need to process the images so that they are ready for analysis. This might involve selecting relevant marks or highlighting specific features. Next, given two images, they need to be aligned so that any similarity measure extracted is meaningful. The final step is to estimate the similarity score.
We have developed R packages to analyze images in the standard format in NIST’s database. [`cartridges`](https://github.com/xhtai/cartridges) analyzes 2D photo images, while [`cartridges3D`](https://github.com/xhtai/cartridges3D) analyzes 3D topographies. A complete description of methodology used in `cartridges` is in Tai and Eddy ([2018](#ref-Tai2018)). `cartridges3D` modifies this for 3D topographies, with the major difference being in pre\-processing. More details can be found in the package [README](https://github.com/xhtai/cartridges3D).
The primary functions of the package are `allPreprocess` and `calculateCCFmaxSearch`. The former performs all pre\-processing steps, while the latter does both alignment and computation of a similarity score. The corresponding function for processing 3D data is `allPreprocess3D`. The end result is a similarity score for a pair of images being compared.
4\.4 Drawing Conclusions
------------------------
Depending on the goal of the analysis, as well as the availability of data, there are a few ways in which conclusions may be drawn. The analysis produces a similarity score for a pair of images. This could be sufficient for the analysis, for example if we have two pairs of images being compared, the goal might be simply to estimate which of the two pairs are more similar to each other. This situation is straightforward, and we can make a conclusion such as “Comparison 1 contains images that are more similar than Comparison 2\.” If a set of comparisons are being done, the conclusion might be of the form “these are the top 10 pairs with highest similarity scores out of the 100 comparisons being made.” This could be used to generate investigative leads, where we select the top 10 (say) pairs for further investigation. A different context in which this type of conclusion could be used is by examiners for blind verification. This means that an examiner first comes to their own conclusion, and then verifies this using an automatic method, making a conclusion such as “Based on my experience and training, these two cartridge cases come from the same gun. This pair that I identified also had a score of .7, the highest similarity score returned by *\[\[insert algorithm]]* among *\[\[insert subset of pairs being considered]]*.”
In other situations, we might be interested in designating a similarity cutoff above which some action is taken. The selection of such a cutoff depends on the goal. For example, similar to the above situation, we might be interested in selecting pairs above a cutoff for further manual investigation, instead of simply picking the top 10 pairs. Alternatively, a cutoff could be used to decide if pairs are matches or non\-matches. This could be of interest in criminal cases, where a conclusion of match or non\-match is required to decide if a person should be implicated in a crime. In the first case a lower cutoff might be set to ensure high recall, while in the second case a much higher cutoff might be necessary.
Given appropriate data on the distribution of similarity scores for non\-matching pairs in some population of interest, a third type of conclusion that we can draw is to estimate a probability of getting a higher similarity score by chance. For example, if we obtain a similarity of .7 for the pair of interest, we compare .7 to some distribution of similarity scores for non\-matching pairs, that might have been obtained from prior studies. The probability of interest is the probability that a random draw from that distribution is larger than .7, say \\(p\_0\\). The conclusion that we can then draw is that if the pair was a non\-match, the probability of getting a score higher than .7 is \\(p\_0\\). If the value of \\(p\_0\\) is small, this provides evidence against the hypothesis that the pair of interest is a non\-matching pair. Such a probability can be used as a measure of the probative value of the evidence.
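In code, this tail probability is simply the proportion of reference scores exceeding the observed score (a minimal sketch, where `nonmatch_scores` is a hypothetical vector of similarity scores from known non\-matching pairs):
```
observed <- 0.7
p0 <- mean(nonmatch_scores > observed)  # estimate of P(score > 0.7 | non-match)
p0
```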
4\.5 Case Study
---------------
The following case study uses two 2D photo images from the NBIDE study in NIST’s database, coming from the same Ruger gun, firing PMC ammunition. Referring to the study metadata, these are cartridge cases RR054 and RR072, corresponding to the files “NBIDE R BF 054\.png” and “NBIDE R BF 072\.png”.
We first load the package:
```
library(cartridges)
```
We can read in and plot the images as follows. If using a user\-downloaded image, one can simply replace the file path with the location of the downloaded image.
```
exImage1 <- readCartridgeImage("./img/casings_NBIDE054.png")
plotImage(exImage1, type = "original")
```
```
exImage2 <- readCartridgeImage("./img/casings_NBIDE072.png")
plotImage(exImage2, type = "original")
```
Now, all the pre\-processing can be done using `allPreprocess`.
```
processedEx1 <- allPreprocess("./img/casings_NBIDE054.png")
processedEx2 <- allPreprocess("./img/casings_NBIDE072.png")
```
The processed images can be plotted using `plotImage`.
```
plotImage(processedEx1, type = "any")
```
```
plotImage(processedEx2, type = "any")
```
Now, to compare these two images, we use
```
calculateCCFmaxSearch(processedEx1, processedEx2)
```
This produces a score of .40\. As discussed in Section [4\.4](casings.html#casings-conclusions), the conclusions to be drawn depend on the goals of the analysis, as well as the availability of data. The first type of conclusion could be that this pair of images is more similar to each other than some other pair of images. The second type of conclusion could be that this score is high enough to warrant further manual investigation. Finally, if we have some prior information on some reference distribution of non\-matching scores, we can compute the probability of obtaining a higher score by chance as follows. Given a reference population of interest, one can perform the appropriate pairwise comparisons and obtain non\-match distributions empirically. Here we use a normal distribution for purposes of illustration.
```
set.seed(0)
computeProb(0.4, rnorm(50, 0.02, 0.3))
```
```
## [1] 0.08
```
The conclusion then is that the probability of obtaining a score higher than .40, for a non\-matching pair, is .08\.
The same type of analysis can be done with 3D topographies using the corresponding functions in `cartridges3D`.
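For example, a minimal sketch of the 3D workflow might look as follows, assuming that `allPreprocess3D` accepts a file path to an x3p scan in the same way that `allPreprocess` accepts a path to a PNG image, and that the same comparison step applies (the file names here are hypothetical):
```
library(cartridges3D)
processed3D_1 <- allPreprocess3D("./img/casing_scan1.x3p")
processed3D_2 <- allPreprocess3D("./img/casing_scan2.x3p")
calculateCCFmaxSearch(processed3D_1, processed3D_2)
```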
Chapter 5 Latent Fingerprints
=============================
#### *Karen Kafadar, Karen Pan*
5\.1 Introduction
-----------------
[Latent fingerprints](glossary.html#def:latentfp) collected at crime scenes have been widely used for individual identification purposes, primarily because fingerprints have long been assumed to be unique to an individual. Thus it is assumed that some subset of features, or [*minutiae*](glossary.html#def:minutiae), on a print can be identified and will suffice to determine whether a latent print and a digital print from a database collected under controlled conditions (e.g., in a police laboratory) came from the “same source.” However, unlike digital fingerprints in a database, latent prints are generally of poor quality and incomplete, missing ridge structures or patterns. The quality of a latent print is needed for the “Analysis” and “Comparison” phases of the “ACE\-V” fingerprint identification process (Friction Ridge Analysis Study and Technology [2012](#ref-swgfast)). This quality is currently judged visually, based on clarity of the print and specific features of it for identification purposes. This process, though done by trained fingerprint examiners, is nevertheless subjective and qualitative, as opposed to objective and quantitative. Once the examiner identifies seemingly usable minutiae on a latent print, the print is entered into an Automated Fingerprint Identification System (AFIS) which uses the examiner’s minutiae to return likely “matches” from a fingerprint database. Thus, the accuracy of the identification depends first and foremost on the quality and usability of the minutiae in a latent. More independent, high\-quality features should lead to more accurate database “matches”.
To date, neither objective measures of quality in selected minutiae, nor dependence among features, nor the number needed for high\-accuracy calls, has been considered. Quoting from B. T. Ulery et al. ([2011](#ref-ulery_2011)), “No such absolute criteria exist for judging whether the evidence is sufficient to reach a conclusion as opposed to making an inconclusive or no\-value decision. The best information we have to evaluate the appropriateness of reaching a conclusion is the collective judgments of the experts.” A digital fingerprint acquisition system does provide a numerical “quality” score of an exemplar print at the time it is taken (to ensure adequate clarity for later comparison), but the “quality” of the latent fingerprint is typically assessed qualitatively by the examiner.
Some authors have proposed measures of overall latent print quality. Bond ([2008](#ref-bond_2008)) defines a five\-point scale primarily in terms of ridge continuity. Tabassi, Wilson, and Watson ([2004](#ref-tabassi_2004)) define a five\-point quality scale in terms of contrast and clarity of features. The five\-point scale reflects how the quality of the latent print impacts the ability of a matching algorithm to find and score matching prints: high \[low] quality is associated with good \[poor] match performance. Yoon, Liu, and Jain ([2012](#ref-yoon_2012)) define a latent fingerprint image quality (LFIQ) score from a user\-defined set of features based on clarity of ridges and features. Tabassi, Wilson, and Watson ([2004](#ref-tabassi_2004)) cite other latent print quality measures that have been proposed and conclude that, for all of them, “evaluating their quality measure is a subjective matter” (Tabassi, Wilson, and Watson [2004](#ref-tabassi_2004), 6\). Nonetheless, there remains “substantial variability in the attributes of latent prints, in the capabilities of latent print examiners, in the types of casework received by agencies, and the procedures used among agencies” (Ulery et al. [2012](#ref-ulery_2012)). Consequently, some procedure that offers an objective measure of minutiae quality is needed.
Different features (minutiae) on a latent print supply different amounts of information to an examiner. Our goal is to develop a quality metric for each feature, based on a measure of information in the feature. Visually, a feature on a print (ridge ending, bifurcation, etc.) is more recognizable when it is easily differentiated from the background around it. In subsequent sections we develop a quality metric for each latent fingerprint feature that quantifies its distinctiveness from its background value, and hence, how reliable a feature might be for purposes of comparison (step 2 of the ACE\-V process).
### 5\.1\.1 Contrast Gradient Quality Measurement
An algorithm introduced in Peskin and Kafadar ([n.d.](#ref-pk_unpub)) identifies and examines a small collection of pixels surrounding a given feature and assesses their distinctiveness from the background pixels. The underlying principle for this approach lies in recognizing that a forensic examiner can distinguish features in a fairly blurry latent print by recognizing:
* a gradient of intensity between the dark and light regions defining a minutia point, and
* an overall contrast of intensity values in the general neighborhood of a minutiae point.
The first step in the algorithm locates the pixel within a small neighborhood around the minutia location that produces the highest intensity gradient value. Further calculation is done around that pixel. The largest gradient in a neighborhood of 5 pixels in each direction from a feature is found. For each pixel in this neighborhood, we compare the pixel intensity, \\(i(x, y)\\), to a neighboring pixel intensity, \\(i(x \+ n, y \+ m)\\), where \\(n, m \\in \\{\-2, \-1, 0, 1, 2\\}\\), and then divide by the corresponding distance between the pixels to get a measure of the gradient at pixel location \\((x, y)\\), \\(g(x, y; n, m)\\):
\\\[ g(x, y; n, m) \= \\frac{i(x, y) \- i(x \+ n, y \+ m)}{ \\sqrt{n^2 \+ m^2}}, \\qquad n, m \= \-2, ..., 2\. \\]
Define \\(G\_5(x, y)\\) as the set of all 24 gradients in the \\(5 \\times 5\\) neighborhood of \\((x, y)\\). The value used for the quality measurement is the maximum value in the set \\(G\_5(x, y)\\). We define \\((x\_0, y\_0\)\\) as the point that produces the largest gradient.
The next step in the algorithm locates the largest contrast between the point \\((x\_0, y\_0\)\\) and its immediate \\(3 \\times 3\\) neighborhood, \\((x\_0 \+ n, y\_0 \+ m)\\), or \\(n\_3(x\_0, y\_0\)\\), where \\(n,m \\in \\{\-1, 0, 1\\}\\). The contrast factor is the largest intensity difference between \\(i(x\_0, y\_0\)\\) and any neighbor intensity in \\(n\_3(x\_0, y\_0\)\\), divided by the maximum intensity in the print, \\(I\_M\\), usually 255:
\\\[ contrast \= \\frac{max\\{abs(i(x\_0, y\_0\) \- i(x,y))\\}}{I\_M}, \\qquad (x, y) \\in n\_3(x\_0, y\_0\). \\]
The contrast measurement differs from the gradient measurement because it highlights the maximum change in intensity among all nine points surrounding the minutia point at \\((x\_0, y\_0\)\\), while the gradient reflects a change in intensity near the minutia point, divided by the distance over that intensity change. Though the calculations are relatively simple, they are able to approximate the two properties around minutiae seen by a forensic examiner.
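To make the two quantities concrete, the following sketch shows one way the calculation could be implemented in R. This is purely illustrative and not code from Peskin and Kafadar: `img` is assumed to be a grayscale intensity matrix on a 0\-255 scale, `x` and `y` index the marked minutia location (assumed not to lie near the image border), and the ±5\-pixel search window around the minutia is our reading of the description above.
```
quality_metric <- function(img, x, y, I_M = 255) {
  best <- list(g = -Inf, x0 = x, y0 = y)
  # search a window around the marked minutia for the pixel with the largest
  # gradient, computed over its 24 neighbors in a 5 x 5 window
  for (dx in -5:5) for (dy in -5:5) {
    for (n in -2:2) for (m in -2:2) {
      if (n == 0 && m == 0) next
      g <- (img[x + dx, y + dy] - img[x + dx + n, y + dy + m]) / sqrt(n^2 + m^2)
      if (g > best$g) best <- list(g = g, x0 = x + dx, y0 = y + dy)
    }
  }
  # contrast: largest intensity difference between (x0, y0) and its immediate
  # 3 x 3 neighborhood, scaled by the maximum intensity
  nbhd <- img[(best$x0 - 1):(best$x0 + 1), (best$y0 - 1):(best$y0 + 1)]
  contrast <- max(abs(img[best$x0, best$y0] - nbhd)) / I_M
  # quality metric: gradient times contrast, capped at 100
  min(best$g * contrast, 100)
}
```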
### 5\.1\.2 Contrast Gradient Quality Measurement Illustration
To illustrate this measurement, we first look at the gradient values on a clear print with well\-defined minutiae. Figure [5\.1](fingerprints.html#fig:clearprint) shows such a print. Along with it are close\-up views of three minutiae from the print, where the ridge ending and bifurcations are clearly seen. One would expect these very clear minutiae to be at the high end of the quality scale when measuring latent minutiae quality. They are at pixel locations (97, 100\), (126, 167\), and (111, 68\), where the origin (0,0\) is the upper left corner of the image. The largest gradients at these 3 locations are 94\.0, 66\.5, and 88\.0 intensity units per grid unit. Looking at gradient values in each \\(5 \\times 5\\) neighborhood, we find slightly higher gradient values for each minutia: 107\.0, 122\.0, and 121\.0\. For each minutia, we shift our focus point to the new \\((x\_0, y\_0\)\\) corresponding to the slightly higher gradient value. For these \\(n\_3(x\_0, y\_0\)\\) neighborhoods, we find the largest intensity differences to get the contrast. Contrast measures for the three locations are, respectively, 0\.447, 0\.651, and 0\.561, yielding quality metrics 107\.0 \\(\\times\\) 0\.447 \= 47\.84, 122\.0 \\(\\times\\) 0\.651 \= 79\.42, and 121\.0 \\(\\times\\) 0\.561 \= 67\.85\. The quality metric ranges from 0\.133 to 100\.0 (see Figure [5\.5](fingerprints.html#fig:table1)) for NIST’s SD27A latent fingerprint database, which has 15,008 defined minutia locations. A quality metric value cannot exceed 100\.0, and any larger scores are capped at 100\.0 (out of the 15,008 minutiae, only 54 produced quality metrics over 100\.0\). We now turn to the issue of applying this quality metric to determine the usability of minutiae information in a latent print.
Figure 5\.1: (Figure 1 from Peskin and Kafadar ([n.d.](#ref-pk_unpub)).) Example of a clear fingerprint and close\-up views of three minutiae from this print. Each horizontal green line in the whole print ends at one of the minutia.
The high\-quality minutiae above are contrasted with those in a latent print. The previously available NIST SD27A database included latent prints classified as “good,” “bad,” and “ugly” by forensic examiners, and we apply our quality metric to these prints. Figure [5\.2](fingerprints.html#fig:gbugly1) shows three typical latent fingerprints from this data set, one from each class. Figure [5\.3](fingerprints.html#fig:gbugly2) shows a closer look at one of the highest quality minutiae on each of these three latent fingerprints.
Figure 5\.2: (Figure 2 from Peskin and Kafadar ([n.d.](#ref-pk_unpub)).) Examples of a good, bad, and ugly latent print from the SD27A set: G043, B132, and U296\.
Figure 5\.3: (Figure 3 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). Examples of good, bad, and ugly minutiae of approximately the same quality. G043: (502, 741\), quality \= 21\.9, B132: (553, 651\), quality \= 27\.3, and U296: (548, 655\), quality \= 24\.3\. Each minutia is located at the center of the 40 × 40 cropped image.
To understand how our quality measure compares with an examiner’s assessment, we select one of the latent prints and show minutiae that were selected by one examiner at different quality levels. Figure [5\.4](fingerprints.html#fig:badlatents) shows a “bad” latent and close\-up views of three minutiae with quality scores 5\.0, 15\.2, and 28\.8\. The gradient and contrast measures increase from left to right in these \\(20 \\times 20\\) pixel images, and the quality metric is calculated within the center quarter (\\(5 \\times 5\\)) of the image.
Figure 5\.4: (Figure 4 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). One of the fingerprints labeled as bad, B110, and an expanded view of three of the minutiae from one examiner, in order of quality: (742, 903\), quality \= 5\.0; (727, 741\), quality \= 15\.2; and (890, 405\), quality \= 28\.8\. For each minutiae point, the examiner\-marked location is at the center of a 20 × 20 square of pixels.
The SD27A set of latent prints is associated with 849 sets of forensic examiner data, each an ANSI/NIST formatted record, containing the locations of all marked minutiae. From these, we can compare our algorithm’s quality metric with the examiner’s assessment of “good,” “bad,” or “ugly.” In Figure [5\.5](fingerprints.html#fig:table1), we tabulate the number of minutiae that a forensic examiner located in each set, and the average quality metric of all minutiae in each set. In addition to calculating the average quality metric across all minutiae on the print, we also calculate the average for only those subsets of minutiae with quality metric exceeding 10\.0, 20\.0, and 30\.0\. In each set of results, latent prints labeled “good” have more higher\-scoring minutiae than prints labeled “bad,” and substantially more than prints labeled “ugly.” The average minutiae quality metric for each set is similar, suggesting that the assigned label of “good,” “bad,” or “ugly” is highly influenced by the number of distinguishable (high\-scoring) minutiae on the print.
Figure 5\.5: (Table 1 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))) Numbers of minutiae and average minutiae quality metric (\\(Q\\)) for three sets of minutia for: all sets combined, and subsets defined by \\(Q\\) (SD \= standard deviation).
We then compared our minutiae quality metric to the ridge quality map, which is provided in record 9\.308 of the American National Standard for Information Systems data format for the Interchange of Fingerprint, Facial, and other Biometric Information (NIST, [n.d.](#ref-nist_itl)). The 9\.308 record contains a small grid of scores for individual pixels on small sections of the latent print, ranked for the quality of a ridge present in that section, with 5 representing the highest score and 0 the lowest. From these grids of 0\-5 values, we obtained the ridge quality scores for individual minutia locations. We compare our (objective) quality metric scores with observer (subjective) ridge qualities for all 15,008 minutiae in the database, as shown in Table [5\.1](fingerprints.html#tab:comparefp).
Table 5\.1: Means and standard deviations (SD) of quality metric values for all 15,008 prints by their ordinal ridge quality score.
| Ridge Quality Score | Frequency | Quality Metric Mean | Quality Metric SD |
| --- | --- | --- | --- |
| 1 | 76 | 18\.6 | 19\.9 |
| 2 | 10829 | 23\.7 | 17\.1 |
| 3 | 3940 | 26\.5 | 18\.2 |
| 4 | 144 | 43\.2 | 26\.1 |
| 5 | 17 | 39\.9 | 14\.8 |
### 5\.1\.3 Test for Usable Quality
We designed a procedure to identify a threshold for this quality metric, below which the feature is unreliable, and above which it may provide reliable information for comparison purposes. To identify this threshold value, clear fingerprint images are systematically degraded and quality metric scores calculated for the minutiae accompanying each degraded image. We start by recognizing that a typical clear print has foreground (ridge) intensity values of 255 (black) on a scale of 0\-255 (white\-black). Accordingly, we simulate different levels of image quality by lowering the foreground intensity of clear prints to values below 255\. As the foreground quality decreases, contrast between the minutiae and background also decreases. In this way, we create a series of images from each clear print with different levels of minutiae quality. We can then ask experts to evaluate which minutiae are, in their judgments, sufficiently distinguishable to be useful in a fingerprint analysis, and then note their conclusions following an actual comparison (“correct match found” or “incorrect match found”). This way, a range of \\(Q\\) for minutiae that are highly correlated with accuracy of analysis can be estimated. Note that one cannot use actual latent prints for such a study, because the background on latent prints is not well characterized and ground truth is unknown.
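The degradation step itself is straightforward (a minimal sketch, not code from the cited work; `img` is assumed to be a 0\-255 grayscale matrix with ridge pixels at 255):
```
# lower the foreground (ridge) intensity to simulate a poorer-quality print
degrade_print <- function(img, foreground) {
  img[img == 255] <- foreground
  img
}
# a series of degraded images at the foreground levels shown in Figure 5.6
degraded <- lapply(c(220, 200, 180, 160, 140, 120, 100), degrade_print, img = img)
```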
Figure [5\.6](fingerprints.html#fig:fptrans) shows an example of a clear print with foreground (ridges) starting at 255 followed by a series of prints in which the ridge intensity is progressively lowered to 100\. Figure [5\.7](fingerprints.html#fig:fptranszoom) magnifies the region around three of the minutiae for foreground values equal to 255 and 100 to show the decreased visibility of the minutiae when the gradients and contrast are severely reduced.
Figure 5\.6: (Figure 6 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). A clear fingerprint on the left and a series of transformations with ridge intensity lowered from 255 to 220, 200, 180, 160, 140, 120, and 100\.
Figure 5\.7: (Figure 7 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). Three of the minutiae from Figure [5\.6](fingerprints.html#fig:fptrans) with foreground \= 255 (top row) and foreground \= 100 (bottom row).
Given a quality metric and a method for systematically decreasing the quality of a fingerprint, we can now design an experiment with different examiners of different abilities and correlate the results of their analyses with true outcomes. If accuracy exceeds, say, 95% only when the print has at least \\(n\_0\\) minutiae having quality metrics above a threshold \\(Q\_0\\), then the examiner has an objective criterion for the “A” (analysis) phase of the ACE\-V process.
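The decision rule suggested here is simple to express in code. In the sketch below, `q` is the vector of quality metrics for the minutiae marked on one print, and `n0` and `Q0` are hypothetical values of the threshold parameters; the example scores are rounded values taken from the figures above.

```
# does a print meet the objective "Analysis" criterion:
# at least n0 minutiae with quality metric above Q0?
meets_criterion <- function(q, n0, Q0) sum(q > Q0) >= n0

# rounded example scores drawn from the figures above (hypothetical print)
q <- c(5.0, 15.2, 28.8, 47.8, 67.9, 79.4)
meets_criterion(q, n0 = 3, Q0 = 20)  # TRUE: four minutiae exceed 20.0
```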
5\.2 Data
---------
Figure 5\.8: Magnetic fingerprint powder clinging to a rod. Source: [Stapleton and Associates](http://www.stapletonandassociates.com/images/MagPowder.jpg)
A common method for recovering latent prints is dusting. A moisture\-clinging powder is gently applied to the surface with soft brushes to increase the visibility of fingerprints. After dusting, prints are usually recorded by lifting – placing a piece of transparent tape over the fingerprint and then transferring the tape, along with the powder, onto a card of contrasting color. Photographs may also be taken of the powdered print. In place of dusting, prints may be chemically processed (super glue, ninhydrin, Rhodamine 6G (R6G), etc.). Clearly visible prints ([patent prints](glossary.html#def:patentprint)), such as those made by paint or blood, may be photographed directly. If the fingerprint is left on an object that is easily transported, the object should be sent to the forensics lab and photographed there to create a digital image in a controlled environment.
Figure 5\.9: Examples of powdered and lifted fingerprints using powders of different color. Source: [Minsei Matec Co.](https://www.kinseimatec.co.jp/en/?page_id=1868.)
Photographs and fingerprint cards are scanned or otherwise converted to digital format. Enhancements to contrast or color may be made using photo\-editing software before producing a finalized grayscale image that may be entered into an AFIS database, directly compared to a suspect’s [exemplar prints](glossary.html#def:exemplar) (known fingerprints, e.g., those taken on a ten print card at a police station), or run through a quality metric algorithm.
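As one possible sketch of the digitization step, assuming the scanned print has been saved as a PNG file (the file name, the choice of the `png` package, and the luminance weights are illustrative assumptions, not part of the workflow described in this chapter), a grayscale 0\-255 pixel matrix can be obtained in R as follows:

```
library(png)

# read a scanned print; readPNG() returns intensities in [0, 1]
scan <- readPNG("scanned_print.png")  # hypothetical file name

# collapse RGB channels to grayscale with standard luminance weights,
# then rescale to the 0-255 range used elsewhere in this chapter
if (length(dim(scan)) == 3) {
  gray <- 0.299 * scan[, , 1] + 0.587 * scan[, , 2] + 0.114 * scan[, , 3]
} else {
  gray <- scan  # already a single-channel image
}
gray255 <- round(gray * 255)
```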
The information contained in a fingerprint can be divided into three levels (Scientific Working Group on Friction Ridge Analysis and Technology [2011](#ref-doj_fingerprint_sourcebook)).
Level 1 detail is the most general, consisting of overall pattern type, friction ridge flow, and morphological information. While insufficient for individualization, level 1 detail may be used for exclusion. “Fingerprint pattern classification” refers to overall fingerprint patterns (e.g., loop, whorl) and their subcategories (e.g., left/right slanted) (Hicklin et al. [2011](#ref-hicklin_2011)).
Figure 5\.10: Level 1 detail in a fingerprint includes overall pattern type and ridge flow, such as loops, whorls, and arches. Source: [Duquesne University](https://fslweb.wordpress.com/category/fingerprints/)
Level 2 detail consists of minutiae and individual friction ridge paths. Minutiae are also called “Galton details” after Sir Francis Galton, the first to define and name specific minutiae (bifurcation, enclosure, ridge endings, island) (Galton [1892](#ref-galton_fingerprints)).
Figure 5\.11: Different types of Level 2 detail, or minutiae, in a fingerprint. Source: [syhrl.blogspot.com](http://syhrl.blogspot.com/2012/04/036-fyp-minutiae.html).
Level 3 detail is the most specific, including friction ridge dimensional attributes such as width, edge shapes, and pores. These details may or may not appear on an exemplar print and are the least reliable.
Figure 5\.12: Level 3 detail of a fingerprint. Source: [onin.com/fp/](http://onin.com/fp/level123.html).
### 5\.2\.1 ACE\-V and AFIS
Latent prints may be submitted to an AFIS database which will return the top *n* potential matches. Examiners can then perform the ACE\-V comparison process on these prints until a “match” is found:
* **A**nalysis: a digitized latent print is analyzed to determine if it is “suitable” for examination. During this process, latent print examiners (LPEs) mark clear, high\-quality sections of a print in green, moderate\-quality sections in yellow, and unclear or distorted sections in red. These colors correspond to features that can be used in comparison, may possibly be used, and will likely not be useful for comparison, respectively. As level 3 detail is unreliable, LPEs use level 1 and 2 detail to determine if a print is suitable to continue to the comparison step. If not, the ACE\-V process ends here.
* **C**omparison: the latent and exemplar prints are compared side by side. In addition to overall pattern and ridge flow, examiners may look for the existence of target groups – unique clusters of minutiae – that correspond between a latent and exemplar. Additional features may be marked in orange.
* **E**valuation: a decision of *identification* (formerly “individualization”), *exclusion*, or *inconclusive* is made based on [OSAC standards](https://www.nist.gov/sites/default/files/documents/2016/10/26/swgfast_examinations-conclusions_2.0_130427.pdf). An inconclusive conclusion may be reached if either print does not contain enough information for a decision. Before reaching a decision, examiners may request consultation with a colleague, who would perform an independent markup of the latent print.
* **V**erification: a decision may (or may not) be verified by another examiner who performs an independent markup of the latent print. If the second examiner does not know the first examiner’s decision, it is considered a blind verification.
IAFIS is the Integrated Automated Fingerprint Identification System developed and maintained by the US FBI (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis)). Implemented in 1999, it contains criminal histories, photographs, and fingerprints for over 70 million individuals with criminal records, as well as fingerprints of 34 million civilians (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis), @fbi\_ngi). The FBI’s Next Generation Identification (NGI) System was announced in 2014 to extend and improve the capabilities of IAFIS. INTERPOL also maintains a database of over 181,000 fingerprint records and almost 11,000 latent prints (INTERPOL, [n.d.](#ref-interpol_fingerprints)).
NIST maintains a series of Biometric Special Databases and Software (NIST, [n.d.](#ref-nist_biometric)), including several fingerprint databases. Special Database 302, not yet released, will contain realistic latent prints and their corresponding exemplars collected from the Intelligence Advanced Research Projects Activity (IARPA) Nail to Nail (N2N) Fingerprint Challenge (Fiumara [2018](#ref-fiumara_talk)). Annotations to these latent prints may be released at a future date. Special Database 27A, which has since been withdrawn, contained 258 latent and rolled mate pairs that two or more LPEs had agreed “match” (i.e., print pairs not guaranteed to be ground truth matches) (Watson [2015](#ref-watson_2015)). The latent prints in this database were classified into three categories: good, bad, and ugly, which allows for general testing of correspondence between overall fingerprint quality and quality scores.
5\.3 R Package
--------------
The R package [`fingerprintr`](https://github.com/kdp4be/fingerprintr) implements the Contrast Gradient Quality Measurement for quantifying the quality of individual features, or minutiae, in a latent fingerprint (Kafadar and Pan [2019](#ref-fingerprintr)). The primary workflow consists of reading the fingerprint image into the correct format (`convert_image`) and then calculating the quality scores (`quality_scores`). If desired, additional information on gradient and contrast can be output by setting `verbose = TRUE` in the `quality_scores` function. If one wishes to see only gradient and contrast values, these can be output by the functions `find_maxgrad` and `find_contrast`, respectively.
```
library(bmp)
library(fingerprintr)
# this image is the first of two used in the simple latent case study below
temp_image <- read.bmp("simple_latent.bmp")
# minutiae information (one per row without semicolons): X,Y; 72,22; 72,32;
# 35,80; 90,85; 59,144
temp_min <- read.csv("simple_latent_min.txt", header = TRUE, sep = ",")
image_file <- convert_image(temp_image, "bmp")
min_file <- as.matrix(temp_min)
# quality scores
quality_scores(image_file, min_file)
quality_scores(image_file, min_file, verbose = TRUE)
# if image already in pixel array format, must be transposed before running
# quality_scores
text_image <- read.csv("simple_latent.txt", header = FALSE, sep = "\t")
quality_scores(t(text_image), min_file)
```
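If the minutiae coordinates are already known, the minutiae matrix can also be built directly in R rather than read from a text file. The sketch below continues from the block above and uses the same five points listed in its comment; whether `quality_scores` requires the column names `X` and `Y` is an assumption to check against the package documentation.

```
# build the minutiae matrix in-line (same five points as the comment above)
min_file <- matrix(c(72, 22,
                     72, 32,
                     35, 80,
                     90, 85,
                     59, 144),
                   ncol = 2, byrow = TRUE,
                   dimnames = list(NULL, c("X", "Y")))

quality_scores(image_file, min_file)
```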
5\.4 Drawing Conclusions
------------------------
Presently, the first step in fingerprint comparison is the analysis phase, in which the examiner assesses a print for usable minutiae that are judged to be sufficiently clear and distinctive for comparison with prints in a database. To reduce the subjectivity in this assessment, we have proposed a “minutia quality metric” for assessing the clarity, and hence usability, of minutiae (e.g., ridge endings, bifurcations, islands) in a fingerprint. The metric is scaled between 0 (totally unusable) and 100 (perfectly clear); more high\-scoring minutiae should lead to greater distinctiveness in the print and hence fewer of the false positives that can occur when trying to match latent prints to database prints using much lower\-quality minutiae. We have shown in this chapter that our metric is computationally efficient and correlates well with image quality: by systematically (via image algorithms) reducing the image quality of a print (and hence of its minutiae), the quality metric decreases accordingly. We also show, using NIST SD27A fingerprint images, that the existence of more high\-quality minutiae correlates well with the experts’ three\-category assessment of fingerprint images (good, bad, ugly).
In future work, we plan to evaluate the usefulness of this algorithm in real practice and to estimate false positive and false negative rates with and without the quality metric. We report on this work in a forthcoming article.
5\.5 Case Study
---------------
Figure 5\.13: Five simple fingerprints on a clean glass plate.
Five simple fingerprints were created on a clean glass plate using a single finger with differing levels of pressure. These levels range from one to five, with one indicating the largest amount of pressure applied. The glass plate was dusted using a black magnetic powder and the revealed prints lifted with tape onto a white fingerprint card. As expected, as pressure decreases, the latent prints decrease in overall quality (visually, the ridges are less thick and lose some continuity) and amount of friction ridge area captured. The entire fingerprint card was scanned and each print was cropped into its own grayscale image file. These cropped images were converted into their equivalent 2D pixel array values (0 to 255, black to white).
Five features (ridge endings (E) and bifurcations (B)) from the second and third prints were analyzed using the Contrast Gradient Quality Metric. Because the metric is based on contrast and gradients, the darker, blacker ridges in Figure [5\.14](fingerprints.html#fig:markedtwo) receive overall higher scores than the lighter gray ridges in Figure [5\.15](fingerprints.html#fig:markedthree) (feature 3 is an exception).
Figure 5\.14: Second print
| Feature | Type | Score |
| --- | --- | --- |
| 1 | E | 70\.27 |
| 2 | B | 57\.76 |
| 3 | E | 54\.99 |
| 4 | E | 58\.32 |
| 5 | B | 54\.78 |
Figure 5\.15: Third print
| Feature | Type | Score |
| --- | --- | --- |
| 1 | E | 29\.46 |
| 2 | B | 19\.99 |
| 3 | E | 79\.86 |
| 4 | E | 20\.97 |
| 5 | B | 23\.79 |
Although only five features are marked in the latent prints above, many more would be identified in casework. In cases where a large number of minutiae are identified, minutiae quality scores could allow examiners to focus first on those with the highest quality, from which presumably the most reliable information may be obtained. The minutiae in the first image above are relatively clear even to an inexperienced observer, and clearly of better contrast than the minutiae in the second image, which the quality scores reflect. Using the quality scores, examiners may be able to focus on high\-scoring features first, or ignore minutiae scoring below a certain threshold.
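Using the scores from the third print above, the following sketch shows how features might be ranked by quality or screened against a working threshold; the threshold of 25 is arbitrary and chosen only for illustration.

```
# quality scores for the five marked features of the third print
third_print <- data.frame(feature = 1:5,
                          type  = c("E", "B", "E", "E", "B"),
                          score = c(29.46, 19.99, 79.86, 20.97, 23.79))

# rank features from highest to lowest quality
third_print[order(third_print$score, decreasing = TRUE), ]

# keep only features at or above a working threshold of 25
subset(third_print, score >= 25)
```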
### Acknowledgements
We would like to thank Dr. Adele Peskin for vital discussion and conversation leading to the creation and development of the Contrast Gradient Algorithm as well as an unpublished manuscript (Peskin and Kafadar, [n.d.](#ref-pk_unpub)).
The SD27A set of latent prints is associated with 849 sets of forensic examiner data, each an ANSI/NIST formatted record, containing the locations of all marked minutiae. From these, we can compare our algorithm’s quality metric with the examiner’s assessment of “good,” “bad,” or “ugly.” In Figure [5\.5](fingerprints.html#fig:table1), we tabulate the number of minutiae that a forensic examiner located in each set, and the average quality metric of all minutiae in each set. In addition to calculating the average quality metric across all minutiae on the print, we also calculate the average for only those subsets of minutiae with quality metric exceeding 10\.0, 20\.0, and 30\.0\. In each set of results, latent prints labeled “good” have more higher\-scoring minutiae than prints labeled “bad,” and substantially more than prints labeled “ugly.” The average minutiae quality metric for each set is similar, suggesting that the assigned label of “good,” “bad,” or “ugly” is highly influenced by the number of distinguishable (high\-scoring) minutiae on the print.
Figure 5\.5: (Table 1 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))) Numbers of minutiae and average minutiae quality metric (\\(Q\\)) for three sets of minutia for: all sets combined, and subsets defined by \\(Q\\) (SD \= standard deviation).
We then compared our minutiae quality metric to the ridge quality map, which is provided in record 9\.308 of the American National Standard for Information Systems data format for the Interchange of Fingerprint, Facial, and other Biometric Information (NIST, [n.d.](#ref-nist_itl)). The 9\.308 record contains a small grid of scores for individual pixels on small sections of the latent print, ranked for the quality of a ridge present in that section, with 5 representing the highest score and 0 the lowest. From these grids of 0\-5 values, we obtained the ridge quality scores for individual minutia locations. We compare our (objective) quality metric scores with observer (subjective) ridge qualities for all 15,008 minutia in the database, as shown in Table [5\.1](fingerprints.html#tab:comparefp)
Table 5\.1: Means and standard deviations (SD) of quality metric values for all 15,008 prints by their ordinal ridge quality score.
| Ridge Quality Score | Frequency | Quality Metric Mean | Quality Metric SD |
| --- | --- | --- | --- |
| 1 | 76 | 18\.6 | 19\.9 |
| 2 | 10829 | 23\.7 | 17\.1 |
| 3 | 3940 | 26\.5 | 18\.2 |
| 4 | 144 | 43\.2 | 26\.1 |
| 5 | 17 | 39\.9 | 14\.8 |
### 5\.1\.3 Test for Usable Quality
We designed a procedure to identify a threshold for this quality metric, below which the feature is unreliable, and above which it may provide reliable information for comparison purposes. To identify this threshold value, clear fingerprint images are systematically degraded and quality metric scores calculated for the minutiae accompanying each degraded image. We start by recognizing that a typical clear print has foreground (ridge) intensity values of 255 (black) on a scale of 0\-255 (white\-black). Accordingly, we simulate different levels of image quality by decreasing the quality of clear prints by lowering the foreground intensity levels to levels lower than 255\. As the foreground quality decreases, contrast between the minutiae and background also decreases. In this way, we create a series of images from each clear print with different levels of minutiae quality. We can then ask experts to evaluate which minutiae are, in their judgments, sufficiently distinguishable to be useful in a fingerprint analysis, and then note their conclusions following an actual comparison (“correct match found” or “incorrect match found”). This way, a range of \\(Q\\) for minutiae that are highly correlated with accuracy of analysis can be estimated. Note that one cannot use actual latent prints for such a study, because the background on latent prints is not well characterized and ground truth is unknown.
Figure [5\.6](fingerprints.html#fig:fptrans) shows an example of a clear print with foreground (ridges) starting at 255 followed by a series of prints in which the ridge intensity is progressively lowered to 100\. Figure [5\.7](fingerprints.html#fig:fptranszoom) magnifies the region around three of the minutiae for foreground values equal to 255 and 100 to show the decreased visibility of the minutiae when the gradients and contrast are severely reduced.
Figure 5\.6: (Figure 6 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). A clear fingerprint on the left and a series of transformations with ridge intensity lowered from 255 to 220, 200, 180, 160, 140, 120, and 100\.
Figure 5\.7: (Figure 7 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). Three of the minutiae from Figure [5\.6](fingerprints.html#fig:fptrans) with foreground \= 255 (top row) and foreground \= 100 (bottom row).
Given a quality metric and a method for systematically decreasing the quality of a fingerprint, we can now design an experiment with different examiners of different abilities and correlate the results of their analyses with true outcomes. If accuracy exceeds, say, 95% only when the print has at least \\(n\_0\\) minutiae having quality metrics above a threshold \\(Q\_0\\), then the examiner has an objective criterion for the “A” (analysis) phase of the ACE\-V process.
### 5\.1\.1 Contrast Gradient Quality Measurement
An algorithm introduced in Peskin and Kafadar ([n.d.](#ref-pk_unpub)) identifies and examines a small collection of pixels surrounding a given feature and assesses their distinctiveness from the background pixels. The underlying principle for this approach lies in recognizing that a forensic examiner can distinguish features in a fairly blurry latent print by recognizing:
* a gradient of intensity between the dark and light regions defining a minutia point, and
* an overall contrast of intensity values in the general neighborhood of a minutiae point.
The first step in the algorithm locates the pixel within a small neighborhood around the minutia location that produces the highest intensity gradient value. Further calculation is done around that pixel. The largest gradient in a neighborhood of 5 pixels in each direction from a feature is found. For each pixel in this neighborhood, we compare the pixel intensity, \\(i(x, y)\\), to a neighboring pixel intensity, \\(i(x \+ n, y \+ m)\\), where \\(n, m \\in \\{\-2, \-1, 0, 1, 2\\}\\), and then divide by the corresponding distance between the pixels to get a measure of the gradient at pixel location \\((x, y)\\), \\(g(x, y; n, m)\\):
\\\[ g(x, y; n, m) \= \\frac{i(x, y) \- i(x \+ n, y \+ m)}{ \\sqrt{n^2 \+ m^2}}, \\qquad n, m \= \-2, ..., 2\. \\]
Define \\(G\_5(x, y)\\) as the set of all 24 gradients in the \\(5 \\times 5\\) neighborhood of \\((x, y)\\). The value used for the quality measurement is the maximum value in the set \\(G\_5(x, y)\\). We define \\((x\_0, y\_0\)\\) as the point that produces the largest gradient.
The next step in the algorithm locates the largest contrast between the point \\((x\_0, y\_0\)\\) and its immediate \\(3 \\times 3\\) neighborhood, \\((x\_0 \+ n, y\_0 \+ m)\\), or \\(n\_3(x\_0, y\_0\)\\), where \\(n,m \\in \\{\-1, 0, 1\\}\\). The contrast factor is the largest intensity difference between \\(i(x\_0, y\_0\)\\) and any neighbor intensity in \\(n\_3(x\_0, y\_0\)\\), divided by the maximum intensity in the print, \\(I\_M\\), usually 255:
\\\[ contrast \= \\frac{max\\{abs(i(x\_0, y\_0\) \- i(x,y)\\}}{I\_M}, \\qquad (x, y) \\in n\_3(x\_0, y\_0\). \\]
The contrast measurement differs from the gradient measurement because it highlights the maximum change in intensity among all nine points surrounding the minutia point at \\((x\_0, y\_0\)\\), while the gradient reflects a change in intensity near the minutia point, divided by the distance over that intensity change. Though the calculations are relatively simple, they are able to approximate the two properties around minutiae seen by a forensic examiner.
### 5\.1\.2 Contrast Gradient Quality Measurement Illustration
To illustrate this measurement, we first look at the gradient values on a clear print with well defined minutiae. Figure [5\.1](fingerprints.html#fig:clearprint) shows such a print. Along with it are close\-up views of three minutiae from the print, where the ridge ending and bifurcations are clearly seen. One would expect these very clear minutiae to be at the high end of the quality scale when measuring latent minutiae quality. They are at pixel locations (97, 100\), (126, 167\), and (111, 68\), where the origin (0,0\) is the upper left corner of the image. The largest gradients at these 3 locations are 94\.0, 66\.5, and 88\.0 intensity units per grid unit. Looking at gradient values in each \\(5 \\times 5\\) neighborhood, we find slightly higher gradient values for each minutia: 107\.0, 122\.0, and 121\.0\. For each minutia, we shift our focus point to the new \\((x\_0, y\_0\)\\) corresponding to the slightly higher gradient value. For these \\(n\_3(x\_0, y\_0\)\\) neighborhoods, we find the largest intensity differences to get the contrast. Contrast measures for the three locations are, respectively, 0\.447, 0\.651, and 0\.561, yielding quality metrics 107\.0 \\(\\times\\) 0\.447 \= 47\.84, 122\.0 \\(\\times\\) 0\.651 \= 79\.42, and 121\.0 \\(\\times\\) 0\.561 \= 67\.85\. The quality metric ranges from 0\.133\-100\.0 (see Figure [5\.5](fingerprints.html#fig:table1)) for NIST’s SD27A latent fingerprint database, which has 15,008 defined minutia locations. A quality metric value cannot exceed 100\.0, and any larger scores are capped at 100\.0 (out of the 15,008 minutia, only 54 produced quality metrics over 100\.0\). We now turn to the issue of applying this quality metric to determine the usability of minutiae information in a latent print.
Figure 5\.1: (Figure 1 from Peskin and Kafadar ([n.d.](#ref-pk_unpub)).) Example of a clear fingerprint and close\-up views of three minutiae from this print. Each horizontal green line in the whole print ends at one of the minutia.
The high quality minutiae above are contrasted with those in a latent print. Previously available database NIST SD27A included latent prints classified as “good,” “bad,” and “ugly” by forensic examiners to which we apply our quality metric. Figure [5\.2](fingerprints.html#fig:gbugly1) shows three typical latent fingerprints from this data set, one from each class. Figure [5\.3](fingerprints.html#fig:gbugly2) shows a closer look at one of the highest quality minutiae on each of these three latent fingerprints.
Figure 5\.2: (Figure 2 from Peskin and Kafadar ([n.d.](#ref-pk_unpub)).) Examples of a good, bad, and ugly latent print from the SD27A set: G043, B132, and U296\.
Figure 5\.3: (Figure 3 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). Examples of good, bad, and ugly minutiae of approximately the same quality. G043: (502, 741\), quality \= 21\.9, B132: (553, 651\), quality \= 27\.3, and U296: (548, 655\), quality \= 24\.3\. Each minutia is located at the center of the 40 × 40 cropped image.
To understand how our quality measure compares with an examiner’s assessment, we select one of the latent prints and show minutiae that were selected by one examiner at different quality levels. Figure [5\.4](fingerprints.html#fig:badlatents) shows a “bad” latent and close\-up views of three minutiae with quality scores 5\.0, 15\.2, and 28\.8\. The gradient and contrast measures increase from left to right in these \\(20 \\times 20\\) pixel images, and the quality metric is calculated within the center quarter (\\(5 \\times 5\\)) of the image.
Figure 5\.4: (Figure 4 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). One of the fingerprints labeled as bad, B110, and an expanded view of three of the minutiae from one examiner, in order of quality: (742, 903\), quality \= 5\.0; (727, 741\), quality \= 15\.2; and (890, 405\), quality \= 28\.8\. For each minutiae point, the examiner\-marked location is at the center of a 20 × 20 square of pixels.
The SD27A set of latent prints is associated with 849 sets of forensic examiner data, each an ANSI/NIST formatted record, containing the locations of all marked minutiae. From these, we can compare our algorithm’s quality metric with the examiner’s assessment of “good,” “bad,” or “ugly.” In Figure [5\.5](fingerprints.html#fig:table1), we tabulate the number of minutiae that a forensic examiner located in each set, and the average quality metric of all minutiae in each set. In addition to calculating the average quality metric across all minutiae on the print, we also calculate the average for only those subsets of minutiae with quality metric exceeding 10\.0, 20\.0, and 30\.0\. In each set of results, latent prints labeled “good” have more higher\-scoring minutiae than prints labeled “bad,” and substantially more than prints labeled “ugly.” The average minutiae quality metric for each set is similar, suggesting that the assigned label of “good,” “bad,” or “ugly” is highly influenced by the number of distinguishable (high\-scoring) minutiae on the print.
Figure 5\.5: (Table 1 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))) Numbers of minutiae and average minutiae quality metric (\\(Q\\)) for three sets of minutia for: all sets combined, and subsets defined by \\(Q\\) (SD \= standard deviation).
We then compared our minutiae quality metric to the ridge quality map, which is provided in record 9\.308 of the American National Standard for Information Systems data format for the Interchange of Fingerprint, Facial, and other Biometric Information (NIST, [n.d.](#ref-nist_itl)). The 9\.308 record contains a small grid of scores for individual pixels on small sections of the latent print, ranked for the quality of a ridge present in that section, with 5 representing the highest score and 0 the lowest. From these grids of 0\-5 values, we obtained the ridge quality scores for individual minutia locations. We compare our (objective) quality metric scores with observer (subjective) ridge qualities for all 15,008 minutia in the database, as shown in Table [5\.1](fingerprints.html#tab:comparefp)
Table 5\.1: Means and standard deviations (SD) of quality metric values for all 15,008 prints by their ordinal ridge quality score.
| Ridge Quality Score | Frequency | Quality Metric Mean | Quality Metric SD |
| --- | --- | --- | --- |
| 1 | 76 | 18\.6 | 19\.9 |
| 2 | 10829 | 23\.7 | 17\.1 |
| 3 | 3940 | 26\.5 | 18\.2 |
| 4 | 144 | 43\.2 | 26\.1 |
| 5 | 17 | 39\.9 | 14\.8 |
### 5\.1\.3 Test for Usable Quality
We designed a procedure to identify a threshold for this quality metric, below which the feature is unreliable, and above which it may provide reliable information for comparison purposes. To identify this threshold value, clear fingerprint images are systematically degraded and quality metric scores calculated for the minutiae accompanying each degraded image. We start by recognizing that a typical clear print has foreground (ridge) intensity values of 255 (black) on a scale of 0\-255 (white\-black). Accordingly, we simulate different levels of image quality by decreasing the quality of clear prints by lowering the foreground intensity levels to levels lower than 255\. As the foreground quality decreases, contrast between the minutiae and background also decreases. In this way, we create a series of images from each clear print with different levels of minutiae quality. We can then ask experts to evaluate which minutiae are, in their judgments, sufficiently distinguishable to be useful in a fingerprint analysis, and then note their conclusions following an actual comparison (“correct match found” or “incorrect match found”). This way, a range of \\(Q\\) for minutiae that are highly correlated with accuracy of analysis can be estimated. Note that one cannot use actual latent prints for such a study, because the background on latent prints is not well characterized and ground truth is unknown.
Figure [5\.6](fingerprints.html#fig:fptrans) shows an example of a clear print with foreground (ridges) starting at 255 followed by a series of prints in which the ridge intensity is progressively lowered to 100\. Figure [5\.7](fingerprints.html#fig:fptranszoom) magnifies the region around three of the minutiae for foreground values equal to 255 and 100 to show the decreased visibility of the minutiae when the gradients and contrast are severely reduced.
Figure 5\.6: (Figure 6 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). A clear fingerprint on the left and a series of transformations with ridge intensity lowered from 255 to 220, 200, 180, 160, 140, 120, and 100\.
Figure 5\.7: (Figure 7 from Peskin and Kafadar ([n.d.](#ref-pk_unpub))). Three of the minutiae from Figure [5\.6](fingerprints.html#fig:fptrans) with foreground \= 255 (top row) and foreground \= 100 (bottom row).
Given a quality metric and a method for systematically decreasing the quality of a fingerprint, we can now design an experiment with different examiners of different abilities and correlate the results of their analyses with true outcomes. If accuracy exceeds, say, 95% only when the print has at least \\(n\_0\\) minutiae having quality metrics above a threshold \\(Q\_0\\), then the examiner has an objective criterion for the “A” (analysis) phase of the ACE\-V process.
5\.2 Data
---------
Figure 5\.8: Magnetic fingerprint powder clinging to a rod. Source: [Stapleton and Associates](http://www.stapletonandassociates.com/images/MagPowder.jpg)
A common method for recovering latent prints is dusting. Moisture clinging powder is gently brushed onto a surface using soft brushes to increase the visibility of fingerprints. After dusting, prints are usually recorded by lifting – placing a piece of transparent tape over the fingerprint then transferring the tape, along with the powder, onto a card of contrasting color. Photographs may also be taken of the powdered print. In place of dusting, prints may be chemically processed (super glue, ninhydrin, Rhodamine 6G (R6G), etc.). Clearly visible prints ([patent prints](glossary.html#def:patentprint)) such as those made by paint or blood may be photographed directly. If the fingerprint is left on an object that is easily transported, the object should be sent to the forensics lab and photographed to create a digital image in a controlled environment.
Figure 5\.9: Examples of powdered and lifted fingerprints using powders of different color. Source: [Minsei Matec Co.](https://www.kinseimatec.co.jp/en/?page_id=1868.)
Photographs and fingerprint cards are scanned or otherwise converted to digital format. Enhancements to contrast or color may be made using a photo editing software before producing a finalized grayscale image that may be entered into an AFIS database, directly compared to a suspects’ [exemplar prints](glossary.html#def:exemplar) (known fingerprints, e.g., those taken on a ten print card at a police station), or run through a quality metric algorithm.
The information contained in a fingerprint can be divided into three levels (Scientific Working Group on Friction Ridge Analysis and Technology [2011](#ref-doj_fingerprint_sourcebook)).
Level 1 detail is the most general, consisting of overall pattern type, friction ridge flow, and morphological information. While insufficient for individualization, level 1 detail may be used for exclusion. “Fingerprint pattern classification” refers to overall fingerprint patterns (e.g., loop, whorl) and their subcategories (e.g., left/right slanted) (Hicklin et al. [2011](#ref-hicklin_2011)).
Figure 5\.10: Level 1 detail in a fingerprint includes overall pattern type and ridge flow, such as loops, whorls, and arches. Source: [Duquesne University](https://fslweb.wordpress.com/category/fingerprints/)
Level 2 detail consists of minutiae and individual friction ridge paths. Minutiae are also called “Galton details” after Sir Francis Galton, the first to define and name specific minutiae (bifurcation, enclosure, ridge endings, island) (Galton [1892](#ref-galton_fingerprints)).
Figure 5\.11: Different types of Level 2 detail, or minutiae, in a fingerprint. Source: [syhrl.blogspot.com](http://syhrl.blogspot.com/2012/04/036-fyp-minutiae.html).
Level 3 detail is the most specific, including friction ridge dimensional attributes such as width, edge shapes, and pores. These details may or may not appear on an exemplar print and are the least reliable.
Figure 5\.12: Level 3 detail of a fingerprint. Source: [onin.com/fp/](http://onin.com/fp/level123.html).
### 5\.2\.1 ACE\-V and AFIS
Latent prints may be submitted to an AFIS database which will return the top *n* potential matches. Examiners can then perform the ACE\-V comparison process on these prints until a “match” is found:
* **A**nalysis: a digitized latent print is analyzed to determine if it is “suitable” for examination. During this process, latent print examiners (LPEs) mark clear, high quality sections of a print in green, moderate quality sections in yellow, and unclear or distorted sections in red. These colors correspond to features that can be used in comparison, may possibly be used, and will likely not be useful for comparison, respectfully. As level 3 detail is unreliable, LPEs use level 1 and 2 detail to determine if a print is suitable to continue to the comparison step. If not, the ACE\-V process ends here.
* **C**omparison: the latent and exemplar prints are compared side by side. In addition to overall pattern and ridge flow, examiners may look for the existence of target groups – unique clusters of minutiae – that correspond between a latent and exemplar. Additional features may be marked in orange.
* **E**valuation: a decision of *identification* (formerly “individualization”), *exclusion*, or *inconclusive* is made based on [OSAC standards](https://www.nist.gov/sites/default/files/documents/2016/10/26/swgfast_examinations-conclusions_2.0_130427.pdf). An inconclusive conclusion may be reached if either print does not contain enough information for a decision. Examiners may request consultation with a colleague before reaching a decision, who would perform an independent markup on the latent print.
* **V**erification: a decision may (or may not) be verified by another examiner who performs an independent markup of the latent print. If the second examiner does not know the first examiner’s decision, it is considered a blind verification.
IAFIS is the Integrated Automatic Fingerprint Identification System developed and maintained by the US FBI (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis)). Implemented in 1999, it contains criminal history, photographs, and fingerprints for over 70 million individuals with criminal histories and fingerprints of 34 million civilians (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis), @fbi\_ngi). The FBI’s Next Generation Identification (NGI) System was announced in 2014 to extend and improve the capabilities of IAFIS. INTERPOL also maintains a database of over 181,000 fingerprint records and almost 11,000 latent prints (INTERPOL, [n.d.](#ref-interpol_fingerprints)).
NIST maintains a series of Biometric Special Databases and Software (NIST, [n.d.](#ref-nist_biometric)), including several fingerprint databases. Special Database 302, not yet released, will contain realistic latent prints and their corresponding exemplars collected from the Intelligence Advanced Research Projects Activity (IARPA) Nail to Nail (N2N) Fingerprint Challenge (Fiumara [2018](#ref-fiumara_talk)). Annotations to these latent prints may be released at a future date. Special Database 27A, which has since been withdrawn, contained 258 latent and rolled mate pairs that two or more LPEs have agreed “match” (i.e. print pairs not guaranteed to be ground truth matches) (Watson [2015](#ref-watson_2015)). The latent prints in this database were classified into three categories: good, bad, and ugly, which allows for general testing of correspondence between overall fingerprint quality and quality scores.
### 5\.2\.1 ACE\-V and AFIS
Latent prints may be submitted to an AFIS database which will return the top *n* potential matches. Examiners can then perform the ACE\-V comparison process on these prints until a “match” is found:
* **A**nalysis: a digitized latent print is analyzed to determine if it is “suitable” for examination. During this process, latent print examiners (LPEs) mark clear, high quality sections of a print in green, moderate quality sections in yellow, and unclear or distorted sections in red. These colors correspond to features that can be used in comparison, may possibly be used, and will likely not be useful for comparison, respectfully. As level 3 detail is unreliable, LPEs use level 1 and 2 detail to determine if a print is suitable to continue to the comparison step. If not, the ACE\-V process ends here.
* **C**omparison: the latent and exemplar prints are compared side by side. In addition to overall pattern and ridge flow, examiners may look for the existence of target groups – unique clusters of minutiae – that correspond between a latent and exemplar. Additional features may be marked in orange.
* **E**valuation: a decision of *identification* (formerly “individualization”), *exclusion*, or *inconclusive* is made based on [OSAC standards](https://www.nist.gov/sites/default/files/documents/2016/10/26/swgfast_examinations-conclusions_2.0_130427.pdf). An inconclusive conclusion may be reached if either print does not contain enough information for a decision. Examiners may request consultation with a colleague before reaching a decision, who would perform an independent markup on the latent print.
* **V**erification: a decision may (or may not) be verified by another examiner who performs an independent markup of the latent print. If the second examiner does not know the first examiner’s decision, it is considered a blind verification.
IAFIS is the Integrated Automatic Fingerprint Identification System developed and maintained by the US FBI (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis)). Implemented in 1999, it contains criminal history, photographs, and fingerprints for over 70 million individuals with criminal histories and fingerprints of 34 million civilians (Investigation, [n.d.](#ref-fbi_iafis)[a](#ref-fbi_iafis), @fbi\_ngi). The FBI’s Next Generation Identification (NGI) System was announced in 2014 to extend and improve the capabilities of IAFIS. INTERPOL also maintains a database of over 181,000 fingerprint records and almost 11,000 latent prints (INTERPOL, [n.d.](#ref-interpol_fingerprints)).
NIST maintains a series of Biometric Special Databases and Software (NIST, [n.d.](#ref-nist_biometric)), including several fingerprint databases. Special Database 302, not yet released, will contain realistic latent prints and their corresponding exemplars collected from the Intelligence Advanced Research Projects Activity (IARPA) Nail to Nail (N2N) Fingerprint Challenge (Fiumara [2018](#ref-fiumara_talk)). Annotations to these latent prints may be released at a future date. Special Database 27A, which has since been withdrawn, contained 258 latent and rolled mate pairs that two or more LPEs have agreed “match” (i.e. print pairs not guaranteed to be ground truth matches) (Watson [2015](#ref-watson_2015)). The latent prints in this database were classified into three categories: good, bad, and ugly, which allows for general testing of correspondence between overall fingerprint quality and quality scores.
5\.3 R Package
--------------
The R Package [`fingerprintr`](https://github.com/kdp4be/fingerprintr) implements the Contrast Gradient Quality Measurement for quantifying the quality of individual features, or minutiae, in a latent fingerprint (Kafadar and Pan [2019](#ref-fingerprintr)). The primary functions include reading in the fingerprint image into the correct format (`convert_image`) then calculating the quality scores (`quality_scores`). If desired, additional information on gradient and contrast can be output by setting `verbose = TRUE` in the `quality_scores` function. If one wishes to only see gradient and contrast values, these can be output by functions `find_maxgrad` and `find_contrast`, respectively.
```
library(bmp)
library(fingerprintr)
# this image is the first of two used in the simple latent case study below
temp_image <- read.bmp("simple_latent.bmp")
# minutiae information (one per row without semicolons): X,Y; 72,22; 72,32;
# 35,80; 90,85; 59,144
temp_min <- read.csv("simple_latent_min.txt", header = TRUE, sep = ",")
image_file <- convert_image(temp_image, "bmp")
min_file <- as.matrix(temp_min)
# quality scores
quality_scores(image_file, min_file)
quality_scores(image_file, min_file, verbose = TRUE)
# if image already in pixel array format, must be transposed before running
# quality_scores
text_image <- read.csv("simple_latent.txt", header = FALSE, sep = "\t")
quality_scores(t(text_image), min_file)
```
5\.4 Drawing Conclusions
------------------------
Presently, the first step in fingerprint comparison is the analysis phase, in which the examiner assesses a print for usable minutia that are judged to be sufficiently clear and distinctive for comparison with prints in a database. To reduce the subjectivity in this assessment, we have proposed a “minutia quality metric” for assessing the clarity, and hence usability, of minutia (e.g., ridge endings, bifurcations, islands) in a fingerprint. The metric is scaled between 0 (totally unusable) and 100 (perfectly clear); more high\-scoring minutia should lead to greater distinctiveness in the print and hence fewer false positives that can occur when trying to match latent prints to database prints using much lower\-quality minutia. We have shown in this chapter that our metric is both computationally efficient and correlates well with image quality: by systematically (via image algorithms) reducing the image quality of a print (and hence of the minutia), the quality metric decreases accordingly. We also show, using NIST SD27A fingerprint images, that the existence of more high\-quality minutia correlates well with the experts’ three\-category assessment of fingerprint images (good, bad, ugly).
In future work, we plan to evaluate the value of this algorithm for real practice, and estimate false positive and false negative rates with and without the quality metric. We report on this work in a forthcoming article.
5\.5 Case Study
---------------
Figure 5\.13: Five simple fingerprints on a clean glass plate.
Five simple fingerprints were created on a clean glass plate using a single finger with differing levels of pressure. These levels range from one to five, with one indicating the largest amount of pressure applied. The glass plate was dusted using a black magnetic powder and the revealed prints lifted with tape onto a white fingerprint card. As expected, as pressure decreases, the latent prints decrease in overall quality (visually, the ridges are less thick and lose some continuity) and amount of friction ridge area captured. The entire fingerprint card was scanned and each print was cropped into its own grayscale image file. These cropped images were converted into their equivalent 2D pixel array values (0 to 255, black to white).
Five features (ridge endings (E) and bifurcations (B)) from the second and third prints were analyzed using the Contrast Gradient Quality Metric. As the metric is interested in contrast and gradients, the darker, blacker ridges in the Figure [5\.14](fingerprints.html#fig:markedtwo) receive overall higher scores than the lighter gray ridges in the Figure [5\.15](fingerprints.html#fig:markedthree) (feature 3 is an exception).
Figure 5\.14: Second print
| Feature | Type | Score |
| --- | --- | --- |
| 1 | E | 70\.27 |
| 2 | B | 57\.76 |
| 3 | E | 54\.99 |
| 4 | E | 58\.32 |
| 5 | B | 54\.78 |
Figure 5\.15: Third print
| Feature | Type | Score |
| --- | --- | --- |
| 1 | E | 29\.46 |
| 2 | B | 19\.99 |
| 3 | E | 79\.86 |
| 4 | E | 20\.97 |
| 5 | B | 23\.79 |
Although only five features are marked in the latent prints above, many more would be identified in casework. In cases where large number of minutiae are identified, minutiae quality scores could allow examiners to focus first on those with higher quality, from which presumably the most reliable information may be obtained. The minutiae in the first image above are relatively clear even to an inexperienced observer, and clearly of better contrast than minutiae in the second image, which the quality scores reflect. Using the quality scores, examiners may be able to focus on high scoring features first, or ignore minutiae scoring below a certain threshold.
### Acknowledgements
We would like to thank Dr. Adele Peskin for vital discussion and conversation leading to the creation and development of the Contrast Gradient Algorithm as well as an unpublished manuscript (Peskin and Kafadar, [n.d.](#ref-pk_unpub)).
Chapter 6 Shoe Outsole Impression Evidence
==========================================
#### *Soyoung Park, Sam Tyner*
6\.1 Introduction
-----------------
In the mess of a crime scene, one of the most abundant pieces of evidence is a [shoe outsole](glossary.html#def:shoeoutsole) impression (Bodziak [2017](#ref-bodziak2017footwear)). A shoe outsole impression is the trace of a shoe that is left behind when the shoe comes in contact with a walking surface. Shoe outsole impressions are often left in pliable materials such as sand, dirt, snow, or blood. Crime scene impressions are lifted using [adhesive](glossary.html#def:adhesivelift) or [electrostatic](glossary.html#def:electrolift) lifting, or [casting](glossary.html#def:casting), to obtain the print left behind.
When a shoe outsole impression is found at a crime scene, the question of interest is, “Which shoe left that impression?” Suppose we have a database of shoe outsole impressions. We want to find the images in the database that are closest to the questioned shoe impression in order to determine the shoe brand, size, and other [class characteristics](glossary.html#def:classchar). Alternatively, if we have information about potential suspects’ shoes, then we need to investigate whether the questioned impression shares characteristics with those shoes. The [summary statistic](glossary.html#def:summstat) we need is the degree of correspondence between the questioned shoe outsole impression from the crime scene (\\(Q\\)) and the known shoeprint from a database or a suspect (\\(K\\)). If the similarity between \\(Q\\) and \\(K\\) is high enough, then we may conclude that the source of \\(Q\\) and \\(K\\) is the same. Thus, the goal is to quantify the degree of correspondence between two shoe outsole impressions.
Note that \\(Q\\) and \\(K\\) have different origins for shoe outsole impressions than they do for [trace glass evidence](glass.html#glass): for glass evidence, sample \\(Q\\) comes from a suspect and \\(K\\) comes from the crime scene, while for shoe outsole impression evidence, sample \\(Q\\) comes from the crime scene and sample \\(K\\) comes from a suspect or a [database](glossary.html#def:database).
### 6\.1\.1 Sources of variability in shoe outsole impressions
There are many characteristics of shoes that are examined. First, the size of the shoe is determined, then the style and manufacturer. These characteristics are known as [*class characteristics*](glossary.html#def:classchar): large numbers of shoes share these characteristics, and most shoes that do not share class characteristics with the impression are easily excluded. For instance, a very popular shoe in the United States is the Nike Air Force One, pictured below (Smith [2009](#ref-fbishoes)). So, seeing a Nike logo and concentric circles in an impression from a Men’s size 13 instantly excludes all shoes that are not Nike Air Force Ones in Men’s size 13\.
Figure 6\.1: The outsole of a Nike Air Force One Shoe. This pattern is common across all shoes in the Air Force One model. Source: [nike.com](https://c.static-nike.com/a/images/t_PDP_1728_v1/f_auto/q3byyhcazwdatzj7si11/air-force-1-mid-07-womens-shoe-0nT1KyYW.jpg)
Next, [*subclass characteristics*](glossary.html#def:subclasschar) are examined. These characteristics are shared by a subset of elements in a class, but not by all elements in the class. In shoe impressions, subclass characteristics usually arise during the manufacturing process. For instance, air bubbles may form in one manufacturing run but not in another. Also, the different molds used to create shoes of the same style and size can have slight differences, such as where pattern elements intersect (Bodziak [1986](#ref-mfrshoes)). Just like class characteristics, subclass characteristics can be used to eliminate possible shoes very easily.
Finally, the most distinctive parts of a shoe outsole impression are the [randomly acquired characteristics](glossary.html#def:racs) (RACs) left behind. The RACs are small nicks, gouges, and bits of debris in the soles of shoes that are acquired over time, seemingly at random, as the shoes are worn. These are the *identifying* characteristics of shoe impressions, and examiners look for these irregularities in the crime scene impressions to make an identification.
### 6\.1\.2 Current practice
Footwear examiners compare \\(Q\\) and \\(K\\) by considering class, subclass, and identifying characteristics on the two impressions. The guideline for comparing footwear impressions from the Scientific Working Group for Shoeprint and Tire Tread Evidence (SWGTREAD) details seven possible conclusions for comparing \\(Q\\) and \\(K\\) (*Standard for Terminology Used for Forensic Footwear and Tire Impression Evidence* [2013](#ref-swgtreadconclude)):
1. Lacks sufficient detail (comparison not possible)
2. Exclusion
3. Indications of non\-association
4. Limited association of class characteristics
5. Association of class characteristics
6. High degree of association
7. Identification
Examiners rely on their knowledge, experience, and [guidelines](http://treadforensics.com/images/swgtread/standards/current/swgtread_08_examination_200603.pdf) from SWGTREAD to come to one of these seven conclusions for footwear impression examinations.
### 6\.1\.3 Goal of this chapter
In this chapter, we will show our algorithmic approach to measuring the degree of similarity between two shoe outsole impressions. For this, [CSAFE](https://forensicstats.org) collected shoe outsole impression data, developed a method for calculating a similarity score between two shoe impressions, and implemented the method in the R package [`shoeprintr`](https://github.com/csafe-isu/shoeprintr) (Park and Carriquiry [2019](#ref-R-shoeprintr)[b](#ref-R-shoeprintr)).
6\.2 Data
---------
Crime scene data that we can use to develop and test a footwear comparison algorithm are very limited for several reasons:
1. Shoe impressions found at crime scenes are confidential because they are a part of cases involving real people.
2. Although some real, anonymized crime scene data is available, we typically don’t know the true source of the impression.
3\. Most shoe outsole impressions found at crime scenes are partial, degraded, or otherwise imperfect, making them inappropriate for developing and testing an algorithm.
### 6\.2\.1 Data collection
CSAFE collected shoe impressions using the two\-dimensional EverOS footwear scanner from [Evident, Inc.](https://www.shopevident.com/product/everos-laboratory-footwear-scanner). The scanner is designed to scan the shoe outsole as the shoe wearer steps onto the scanner. As more pressure is put on the scanner, more detailed patterns are detected from the outsole. The resulting images have a resolution of 300 DPI. In addition, the images show a ruler to allow analysts to measure the size of the scanned impression. Figure [6\.2](shoe.html#fig:everosx) shows examples of impression images from the EverOS scanner. On the left, there are two replicates of the left shoe from one shoe style, while on the right, there are two replicates of the right shoe from a different shoe style. The repeated impressions are very similar, but not exact, because there are some differences in the amount and location of pressure from the shoe wearer while scanning.
Figure 6\.2: Examples of images from the EverOS scanner. At left, two replicates of one left shoe. At right, two replicates from the right shoe of another pair of shoes.
This scanner enables us to collect shoe impressions and make comparisons where ground truth is known. By collecting repeated impressions from the same shoe, we can construct comparisons between known mates (same shoe) and known non\-mates (different shoes). CSAFE has made available a large collection of shoe outsole impressions from the EverOS scanner (CSAFE [2019](#ref-shoedb)).
### 6\.2\.2 Transformation of shoe image
The resulting image from the EverOS scanner is very large, making comparisons of images very time\-consuming. To speed up the comparison process, we used [MATLAB](https://www.mathworks.com/products/matlab.html) to downsample all of the images to 20% of their original size. Next, we apply [edge detection](https://en.wikipedia.org/wiki/Edge_detection) to the images, which leaves us with outlines of the important patterns of the shoe outsoles. We extract the edges of the outsole impression using the [Prewitt operator](https://en.wikipedia.org/wiki/Prewitt_operator) to obtain the (\\(x\\),\\(y\\)) coordinates of the shoe patterns. All class and subclass characteristics, as well as possible RACs on the outsole impression, are retained using this method.
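For readers who want a feel for this step, the sketch below is a minimal base\-R re\-implementation of Prewitt edge detection on a 0\-to\-255 grayscale matrix. It is not the MATLAB pipeline used here, and the gradient threshold of 100 is an arbitrary choice made for illustration.

```
# Minimal sketch of Prewitt edge detection (not the MATLAB pipeline used here).
# `pix` is a 0-255 grayscale matrix; the threshold of 100 is an arbitrary choice.
prewitt_edges <- function(pix, threshold = 100) {
  kx <- matrix(rep(c(-1, 0, 1), each = 3), nrow = 3)  # horizontal gradient kernel
  ky <- t(kx)                                         # vertical gradient kernel
  nr <- nrow(pix); nc <- ncol(pix)
  gx <- gy <- matrix(0, nr, nc)
  for (i in 2:(nr - 1)) {
    for (j in 2:(nc - 1)) {
      patch    <- pix[(i - 1):(i + 1), (j - 1):(j + 1)]
      gx[i, j] <- sum(patch * kx)
      gy[i, j] <- sum(patch * ky)
    }
  }
  mag   <- sqrt(gx^2 + gy^2)                          # gradient magnitude
  edges <- which(mag > threshold, arr.ind = TRUE)     # pixels flagged as edges
  # return (x, y) coordinates, with y increasing upward as in the plots below
  data.frame(x = edges[, "col"], y = nr - edges[, "row"])
}
```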
The final data we use for comparison are the \\(x\\) and \\(y\\) coordinate values of the impression area edges that we extracted from the original image in the \\(Q\\) shoe and the \\(K\\) shoe, as shown in Figure [6\.3](shoe.html#fig:datshoe).
```
library(tidyverse)

# read the (x, y) edge coordinates for the questioned and known impressions
shoeQ <- read.csv("dat/Q_03L_01.csv")
shoeK <- read.csv("dat/K_03L_05.csv")
shoeQ$source <- "Q"
shoeK$source <- "K"

# plot the edge points of the two impressions side by side
bind_rows(shoeQ, shoeK) %>%
    ggplot() +
    geom_point(aes(x = x, y = y), size = 0.05) +
    facet_wrap(~source, scales = "free")
```
Figure 6\.3: Outsole impressions of two shoes, \\(K\\) and \\(Q\\).
### 6\.2\.3 Definition of classes
We use the data CSAFE collected to construct pairs of known mates (KM) and known non\-mates (KNM) for comparison. The KMs are two repeated impressions from the same shoe, while the KNMs are from two different shoes. For the known non\-mates, the shoes are identical in size, style, and brand, but they come from two different pairs of shoes worn by two different people. We want to train a comparison method using this data because shoes that are the same size, style, and brand are the most similar to each other, and thus the detection of differences will be very hard. By training a method on the hardest comparisons, we make all comparisons stronger.
6\.3 R Package(s)
-----------------
The R package `shoeprintr` was created to perform shoe outsole comparisons (Chapter 3 in Park ([2018](#ref-park2018learning))).
To begin, the package can be installed from GitHub using [`devtools`](https://devtools.r-lib.org/):
```
devtools::install_github("CSAFE-ISU/shoeprintr", force = TRUE)
```
We then attach it and other packages below:
```
library(shoeprintr)
library(patchwork)
```
We use the following method to quantify the similarity between two impressions:
1. Select “interesting” sub\-areas in the \\(Q\\) impression found at the crime scene.
2. Find the closest corresponding sub\-areas in the \\(K\\) impression.
3. Overlay sub\-areas in \\(Q\\) with the closest corresponding areas in \\(K\\).
4. Define similarity features we can measure to create an outsole signature.
5. Combine those features into one single score.
To begin our comparison, we choose three circular sub\-areas of interest in \\(Q\\) by giving their centers and radii. These areas can be selected “by hand” by examiners, as we do here, or by computer, as we do in Section [6\.5](shoe.html#shoe-case-study). The three circles of interest are given in `input_circ`:
```
# one row per circle: center x, center y, radius (matrix filled column-wise)
input_circ <- matrix(c(75.25, 110, 170, 600.4, 150, 470, 50, 50, 50), nrow = 3,
    ncol = 3)
input_circ
```
```
## [,1] [,2] [,3]
## [1,] 75.25 600.4 50
## [2,] 110.00 150.0 50
## [3,] 170.00 470.0 50
```
For circle \\(q\_1\\), we select a center of (75\.25, 600\.4\) and a radius of 50\. The second and third rows of the matrix give the centers and radii for circles \\(q\_2\\) and \\(q\_3\\).
```
start_plot(shoeQ[, -3], shoeK[, -3], input_circ)
```
Here, we draw the edge coordinates from \\(Q\\) and \\(K\\). The function `start_plot` draws the \\(Q\\) impression colored by the three circular areas we chose. We call the red circle \\(q\_1\\), the orange circle \\(q\_2\\), and the green circle \\(q\_3\\). The goal here is to find the closest areas to \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(K\\).
To find the closest areas to \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(K\\), we use the function `match_print_subarea`. In this particular comparison with full impressions of \\(Q\\) and \\(K\\), we know that circle \\(q\_1\\) is located in the left toe area of \\(Q\\). Thus, the algorithm confines the search area in \\(K\\) to the upper left area of \\(K\\).
```
match_print_subarea(shoeQ, shoeK, input_circ)
```
Figure 6\.4: Final match result between shoe Q and shoe K
The function `match_print_subarea` produces the best matching areas in shoe \\(K\\) for the fixed circles \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(Q\\). Figure [6\.4](shoe.html#fig:matchgraph) shows the final result from the function `match_print_subarea`. The red, orange, and green circles in the right panel of Figure [6\.4](shoe.html#fig:matchgraph) are the circles the algorithm found to have the best overlap with circles \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(Q\\).
Next, we explore how the function `match_print_subarea` finds the corresponding areas between two shoe impressions. To find the corresponding area for circle \\(q\_1\\) in shoe \\(K\\), the underlying algorithm first finds many candidate circles in shoe \\(K\\), as shown in Figure [6\.5](shoe.html#fig:step1plot).
Figure 6\.5: Circle \\(q\_1\\) in \\(Q\\) (left) and candidate circles in \\(K\\)
For the comparison, the function selects many candidate circles in \\(K\\), labeled \\(k\_1, \\dots, k\_9\\) in Figure [6\.5](shoe.html#fig:step1plot). For the candidate circles in \\(K\\), we use a larger radius than that of circle \\(q\_1\\) because we want a candidate circle to fully contain circle \\(q\_1\\) if the two are mates. Ideally, the union of the candidate circles in \\(K\\) should contain all edge points in \\(K\\). The algorithm compares all candidate circles in shoe \\(K\\) with the fixed circle \\(q\_1\\) and picks the closest one as the area corresponding to \\(q\_1\\) in shoe \\(K\\).
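One simple way to generate such a covering set of candidate circles is to lay a regular grid of centers over the bounding box of \\(K\\)'s edge points, as in the sketch below. This is an illustration only, not the `shoeprintr` internals, and the radius of 65 and grid step of 100 pixels are assumed values.

```
# Illustrative sketch (not the shoeprintr internals): a regular grid of
# candidate circle centers covering K's edge points. The radius (65) is larger
# than q1's (50) so a true mate fits inside; radius and step are assumptions.
candidate_centers <- function(img, radius = 65, step = 100) {
  gx <- seq(min(img$x) + radius, max(img$x) - radius, by = step)
  gy <- seq(min(img$y) + radius, max(img$y) - radius, by = step)
  expand.grid(x = gx, y = gy)    # one row per candidate circle center
}
head(candidate_centers(shoeK[, -3]))
```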
### 6\.3\.1 One subarea matching
In this section, we show one example of the matching process by examining the comparison between circle \\(q\_1\\) and circle \\(k\_8\\) in Figure [6\.5](shoe.html#fig:step1plot). We selected \\(k\_8\\) because it matches circle \\(q\_1\\) well.
```
nseg <- 360
# edge points of shoeQ inside circle q1: radius 50, center (75.25, 600.4)
circle_q1 <- data.frame(int_inside_center(data.frame(shoeQ), 50, nseg, 75.25,
    600.4))
# edge points of shoeK inside candidate circle k8: radius 65, center (49, 700)
circle_k8 <- data.frame(int_inside_center(data.frame(shoeK), 65, nseg, 49, 700))
match_q1_k8 <- boosted_clique(circle_q1, circle_k8, ncross_in_bins = 30, xbins_in = 20,
    ncross_in_bin_size = 1, ncross_ref_bins = NULL, xbins_ref = 30, ncross_ref_bin_size = NULL,
    eps = 0.75, seed = 1, num_cores = parallel::detectCores(), plot = TRUE,
    verbose = FALSE, cl = NULL)
```
The `shoeprintr` function `boosted_clique` finds the subset of pixels (the [maximal clique](glossary.html#def:maxclique)) that can be used to align circle \\(q\_1\\) and circle \\(k\_8\\). Using the corresponding pixels, the function computes the rotation angle and translation that result in the best alignment between them. Circle \\(q\_1\\) is then transformed to lie on top of circle \\(k\_8\\) using the calculated alignment information. The function also produces summary plots, as shown in Figure [6\.6](shoe.html#fig:MCresult), and a table of similarity features, as shown in Table [6\.1](shoe.html#tab:resulttable).
Figure 6\.6: Resulting plot when comparing circle \\(q\_1\\) (blue points) and circle \\(k\_8\\) (red points).
Figure [6\.6](shoe.html#fig:MCresult) has four sections. In the first row, first column, the distances between all points in the maximal clique are shown: the x\-axis is the distance between points in circle \\(k\_8\\) and the y\-axis is the distance between points in circle \\(q\_1\\). These values should fall on or near the \\(y\=x\\) diagonal, because for identical circles, the points in the maximal clique are the same. The second plot in the first row of Figure [6\.6](shoe.html#fig:MCresult) shows the \\((x,y)\\) values of the points in the maximal clique. Red points are from \\(k\_8\\), blue circles are from \\(q\_1\\). The bottom two plots show all points in the two circles after alignment. We can see that the two circles share a large area of overlap: the blue points are almost perfectly overlapping with the red points.
Table 6\.1: Resulting table from the matching between q1 and k8
| Clique size | Rot. angle | Overlap on k8 | Overlap on q1 | Median distance | center x | center y | radius |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 18 | 12\.05 | 0\.75 | 0\.97 | 0\.3 | 54\.5 | 688\.5 | 50\.28 |
Table [6\.1](shoe.html#tab:resulttable) contains the similarity measures and other information about \\(k\_8\\), the closest matching circle to \\(q\_1\\). The first five features measure the degree of similarity between circle \\(q\_1\\) and circle \\(k\_8\\) after aligning them. There are 18 pixels in the maximal clique, and the rotation angle between them is 12\.05\\(^o\\). The two overlap features indicate that 75% of circle \\(k\_8\\) pixels overlap with circle \\(q\_1\\) and 97% of circle \\(q\_1\\) pixels overlap with circle \\(k\_8\\) after alignment. The median distance between close points after aligning the circles is 0\.3\. For the clique size and overlap metrics, larger values indicate more similar circles, while the opposite is true for the distance metric. The last three columns give information about the circle in \\(K\\) that best matches circle \\(q\_1\\): the circle with center (54\.5, 688\.5\) and radius 50\.28 in \\(K\\) is the best match to \\(q\_1\\).
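To make the overlap features concrete, the sketch below shows one way such a fraction could be computed after alignment: the proportion of points in one circle that have a neighbor in the other circle within some small distance \\(\\epsilon\\). The choice of \\(\\epsilon \= 0\.75\\) mirrors the `eps` argument above but is otherwise an assumption; the actual `shoeprintr` computation may differ.

```
# Hedged sketch of an overlap-style feature (not necessarily shoeprintr's exact
# definition): the fraction of points in `a` with a neighbor in `b` within eps.
overlap_fraction <- function(a, b, eps = 0.75) {
  # a, b: two-column matrices of (x, y) coordinates for the aligned circles
  na   <- nrow(a)
  d    <- as.matrix(dist(rbind(a, b)))
  d_ab <- d[seq_len(na), na + seq_len(nrow(b)), drop = FALSE]  # a-to-b distances
  mean(apply(d_ab, 1, min) < eps)  # share of a's points with a close partner in b
}
```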
Figure 6\.7: Circles \\(q\_1\\) and \\(k\_8\\) in context.
Figure [6\.7](shoe.html#fig:graphq1) shows the two matching circles as part of the larger shoe impression. By fixing circle \\(q\_1\\) (in blue), we found the closest circle (in red) in shoe \\(K\\). The blue circle in \\(Q\\) and red circle in \\(K\\) look pretty similar. This process would be repeated for the two other circles that we fixed in \\(Q\\), using the function `match_print_subarea`.
6\.4 Drawing Conclusions
------------------------
To determine whether impressions \\(Q\\) and \\(K\\) were made by the same shoe or not, we need to define the signature of the questioned shoe outsole impression. We define the signature of shoe \\(Q\\) as the three circular areas of the shoe after edge detection.
Figure 6\.8: Comparing the lengths of the blue lines is important for determining if the same shoe made impression \\(Q\\) and impression \\(K\\).
Figure [6\.8](shoe.html#fig:signature) shows the signature in \\(Q\\) and the corresponding areas in \\(K\\) found by the `match_print_subarea` function. We use the average of the five similarity features from the three circle matchings to draw a conclusion. If the circles in \\(K\\) match the circles in \\(Q\\), then the geometry relating the three circles in \\(Q\\) and in \\(K\\) should be very similar. The blue lines in Figure [6\.8](shoe.html#fig:signature) form a triangle whose vertices are the centers of the three circles in each shoe impression. The difference in side lengths (numbered 1, 2, 3 in the figure) between the two triangles is an additional feature used to determine whether \\(Q\\) and \\(K\\) were made by the same shoe. If the two triangles in Figure [6\.8](shoe.html#fig:signature) have similar side lengths, this is evidence that shoe \\(Q\\) and shoe \\(K\\) are from the same source.
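A minimal sketch of this geometric feature is given below: the side lengths of each triangle are the pairwise distances between the three circle centers, and the feature is the absolute difference of corresponding sides. The \\(Q\\) centers are those in `input_circ`; only the first matched center in \\(K\\), (54\.5, 688\.5\), appears in Table [6\.1](shoe.html#tab:resulttable), so the other two \\(K\\) centers below are illustrative placeholders.

```
# Sketch of the triangle side-length feature. Q centers come from input_circ;
# the first K center is from Table 6.1, the other two are illustrative only.
triangle_sides <- function(centers) {
  d <- as.matrix(dist(centers))   # pairwise Euclidean distances between centers
  c(d[1, 2], d[2, 3], d[1, 3])    # the three sides of the triangle
}
centers_Q <- rbind(c(75.25, 600.4), c(110.0, 150.0), c(170.0, 470.0))
centers_K <- rbind(c(54.50, 688.5), c(115.0, 160.0), c(165.0, 480.0))
abs(triangle_sides(centers_Q) - triangle_sides(centers_K))  # side-length differences
```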
Table 6\.2: Resulting table comparing Q and K.
| Comparison | Clique size | Rot. angle | Overlap on k\* | Overlap on q | Median distance | Diff in length from Triangle |
| --- | --- | --- | --- | --- | --- | --- |
| \\(q\_1\\)\-\\(k^\*\_1\\) | 18 | 12\.05 | 0\.75 | 0\.97 | 0\.3 | 0\.58 |
| \\(q\_2\\)\-\\(k^\*\_2\\) | 17 | 10\.57 | 0\.53 | 0\.91 | 0\.43 | 0\.55 |
| \\(q\_3\\)\-\\(k^\*\_3\\) | 20 | 12\.14 | 0\.63 | 1 | 0\.24 | 1\.03 |
Table [6\.2](shoe.html#tab:finaltable) contains the summary features from comparing the three\-circle signature in \\(Q\\) to the closest corresponding areas in \\(K\\). There is one row for each comparison between \\(q\_i\\) and \\(k^{\*}\_i\\) for \\(i\=1,2,3\\). The last column is the absolute value of the difference in triangle side lengths between the three circles in \\(Q\\) and those in \\(K\\). Smaller differences indicate more similarity between \\(Q\\) and \\(K\\). Finally, we take the average of all features except the rotation angle over the three circles. For the rotation angle, we take the standard deviation of the three measurements instead of the mean. If the two prints come from the same shoe, the rotation angle will be very similar in all three circles, so we expect small values of the standard deviation to indicate mates, while large values indicate non\-mates.
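The sketch below applies this summarization to the per\-circle features in Table [6\.2](shoe.html#tab:finaltable): each feature is averaged over the three circles, except the rotation angle, which is summarized by its standard deviation.

```
# Summarize the per-circle features of Table 6.2: means for most features,
# standard deviation for the rotation angle.
feats <- data.frame(
  clique_size = c(18, 17, 20),
  rot_angle   = c(12.05, 10.57, 12.14),
  overlap_k   = c(0.75, 0.53, 0.63),
  overlap_q   = c(0.97, 0.91, 1.00),
  med_dist    = c(0.30, 0.43, 0.24),
  diff_length = c(0.58, 0.55, 1.03)
)
summary_row <- c(colMeans(feats[, names(feats) != "rot_angle"]),
                 sd_rot_angle = sd(feats$rot_angle))
round(summary_row, 2)
```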
6\.5 Case Study
---------------
Comparing two impressions with totally different outsole patterns is a very easy problem for humans and computers. But what if we are tasked to compare two impressions from two different shoes from different people that have the same brand, model, and size?
Suppose we have one questioned shoe outsole impression (\\(Q\\)) and two known shoe impressions (\\(K\_1, K\_2\\)) from two different shoes. All three impressions share the same [class characteristics](glossary.html#def:classchar). How can we conclude if the source of \\(Q\\) is the same as \\(K\_1\\) or \\(K\_2\\)?
Figure 6\.9: Example images of shoe outsole impressions. The questioned impression (\\(Q\\)) and two known impressions (\\(K\_1\\), \\(K\_2\\)). Does \\(Q\\) have the same source as \\(K\_1\\) or \\(K\_2\\)?
Figure [6\.9](shoe.html#fig:eximgs) displays three outsole impressions. With a cursory glance, it looks like impression \\(Q\\) could have been made by the same shoe as either \\(K\_1\\) or \\(K\_2\\). The two known impressions are from the same brand and model of shoes, which were worn by two different people for about six months. Since the three impressions all share class and subclass characteristics, it is a very hard comparison. There are some differences among images \\(Q, K\_1, K\_2\\) but it is hard to determine if these differences are due to measurement errors or variation in the EverOS scanner or to wear and tear of the outsole or RACs.
### 6\.5\.1 Compare \\(Q\\) and \\(K\_1\\)
Let’s compare \\(Q\\) and \\(K\_1\\) first. In \\(Q\\), we select three circular areas. For this, we use the function `initial_circle` to select three centers located at the (30%, 80%), (20%, 40%), and (70%, 70%) quantiles of the \\(x\\) and \\(y\\) ranges of the coordinates in \\(Q\\). The function automatically generates the corresponding circle centers, each with a radius of 50\.
```
input_circles_Q <- initial_circle(imgQ)
input_circles_Q
```
```
## [,1] [,2] [,3]
## [1,] 89.7 578.2 50
## [2,] 67.8 296.6 50
## [3,] 177.3 507.8 50
```
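For intuition, a plausible re\-implementation of this center selection is sketched below (the actual `initial_circle()` in `shoeprintr` may differ): each center sits at a fixed fraction of the \\(x\\) and \\(y\\) coordinate ranges, and each circle gets a radius of 50\.

```
# Plausible sketch of the center selection (the real initial_circle() may
# differ): centers at fixed fractions of the x and y ranges, radius 50.
initial_circle_sketch <- function(img, fx = c(0.3, 0.2, 0.7),
                                  fy = c(0.8, 0.4, 0.7), radius = 50) {
  cx <- min(img$x) + fx * diff(range(img$x))   # x coordinates of the centers
  cy <- min(img$y) + fy * diff(range(img$y))   # y coordinates of the centers
  cbind(cx, cy, radius)                        # same layout as input_circ
}
```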
```
start_plot(imgQ, imgK1, input_circles_Q)
```
Figure 6\.10: The input circles and the shoe \\(Q\\) impression (left) and the shoe \\(K\_1\\) impression (right).
Next, we run `match_print_subarea()` to find circles in \\(K\_1\\) that most closely correspond to the circles in \\(Q\\). The result is shown in Figure [6\.11](shoe.html#fig:exKM). In \\(K\_1\\), the algorithm finds the closest circles, which show the most overlap with each of the circles that we fixed in \\(Q\\). The corresponding circle information and summary features are shown in Tables [6\.3](shoe.html#tab:KMtable1) and [6\.4](shoe.html#tab:KMtable2), respectively. Each row in Table [6\.4](shoe.html#tab:KMtable2) shows the similarity features from circle \\(q\_i\\) and circle \\(k^{1}\_i\\) for \\(i\=1,2,3\\).
```
match_print_subarea(imgQ, imgK1, input_circles_Q)
```
Figure 6\.11: The input circles (\\(q\_1, q\_2, q\_3\\)) and the shoe \\(Q\\) impression (left) and the closest matching circles (\\(k\_1^1, k\_2^1, k\_3^1\\)) and shoe \\(K\_1\\) impression (right), which are the result of `match_print_subarea`.
Table 6\.3: Centers and radius information for circles in \\(K\_1\\) that are the closest match to the circles in \\(Q\\) according to the matching algorithm.
| Reference\_X | Reference\_Y | Reference\_radius |
| --- | --- | --- |
| 71\.0 | 567\.5 | 52\.28 |
| 85\.5 | 286\.0 | 56\.32 |
| 182\.5 | 495\.0 | 54\.15 |
Table 6\.4: Similarity features from comparing circles in \\(Q\\) to the best matching circles in \\(K\_1\\).
| clique\_size | rotation\_angle | reference\_overlap | input\_overlap | med\_dist\_euc | diff |
| --- | --- | --- | --- | --- | --- |
| 15 | 1\.23 | 0\.55 | 0\.86 | 0\.49 | 0\.58 |
| 18 | 1\.88 | 0\.42 | 1\.00 | 0\.09 | 20\.62 |
| 17 | 2\.68 | 0\.65 | 0\.96 | 0\.31 | 7\.49 |
### 6\.5\.2 Compare \\(Q\\) and \\(K\_2\\)
We repeat the process from Section [6\.5\.1](shoe.html#sec:shoeqk1) to find the best matching circles in \\(K\_2\\).
```
match_print_subarea(imgQ, imgK2, input_circles_Q)
```
Figure 6\.12: The input circles (\\(q\_1, q\_2, q\_3\\)) and the shoe \\(Q\\) impression (left) and the closest matching circles (\\(k\_1^2, k\_2^2, k\_3^2\\)) and shoe \\(K\_2\\) impression (right), which are the result of `match_print_subarea`.
Figure [6\.12](shoe.html#fig:exKNM) shows the circles from the matching algorithm. In this comparison, \\(k\_3^2\\) is in a very different position compared to \\(q\_{3}\\).
Table 6\.5: Centers and radius information for the circles in \\(K\_2\\) that the matching algorithm found to best overlap the fixed circles in \\(Q\\)
| Reference\_X | Reference\_Y | Reference\_radius |
| --- | --- | --- |
| 66\.0 | 582\.5 | 51\.20 |
| 66\.0 | 302\.0 | 55\.61 |
| 199\.5 | 431\.0 | 51\.44 |
Table 6\.6: Similarity features by comparing circles in \\(Q\\) to the best matching circles in \\(K\_2\\).
| clique\_size | rotation\_angle | reference\_overlap | input\_overlap | med\_dist\_euc | diff |
| --- | --- | --- | --- | --- | --- |
| 18 | 1\.87 | 0\.64 | 0\.89 | 0\.53 | 1\.95 |
| 14 | 7\.16 | 0\.39 | 0\.92 | 0\.52 | 89\.54 |
| 16 | 32\.93 | 0\.26 | 0\.51 | 0\.58 | 52\.26 |
Table [6\.5](shoe.html#tab:KNMtable1) and Table [6\.6](shoe.html#tab:KNMtable2) show the location information of the matching circles in \\(K\_2\\) for the fixed circles in \\(Q\\) and their corresponding similarity features, respectively.
### 6\.5\.3 Interpreting Comparisons between \\(Q, K\_1\\) and \\(Q, K\_2\\)
To summarize the similarity features from the match of two impressions, we take the average of each similarity feature across the three circle matchings, and the standard deviation of the rotation angle estimates. For features such as clique size and overlap, larger values indicate more similar impressions. For the median distance and the absolute difference in triangle side lengths between the two impressions, smaller values indicate more similar impressions. For the rotation angle estimates, we use the standard deviation of the three values because we are interested in comparing the three values to each other, not in the values of the angles themselves. If all rotations are about the same, the algorithm is consistently finding the same patterns, and the prints are likely from the same shoe, even if the image has been rotated. If the rotation angles are very different, however, the algorithm is forcing similarity where it doesn’t exist, and the prints are likely from different shoes.
Table 6\.7: Summary table of two comparisons of \\(Q\-K\_1\\) and \\(Q\-K\_2\\). Similarity features are averaged but rotation angles are summarized as standard deviation.
| Match | clique\_size | sd\_rot\_angle | overlap\_k | overlap\_q | med\_dist | diff\_length |
| --- | --- | --- | --- | --- | --- | --- |
| Q, \\(K\_1\\) | 16\.67 | 0\.73 | 0\.54 | 0\.94 | 0\.30 | 9\.56 |
| Q, \\(K\_2\\) | 16\.00 | 16\.62 | 0\.43 | 0\.77 | 0\.54 | 47\.92 |
Table [6\.7](shoe.html#tab:summarytable) shows the summarized similarity features from the two comparisons we did, \\(Q,K\_1\\) and \\(Q,K\_2\\). In both comparisons, the average clique size and median distance look similar. The standard deviation (SD) of the rotation angle estimates and the difference in triangle side lengths, however, are very different. The comparison \\(Q,K\_1\\) results in similar rotation estimates (1\.23, 1\.88, 2\.68\), while the comparison \\(Q,K\_2\\) results in very different rotation estimates (1\.87, 7\.16, 32\.93\), as seen in Table [6\.4](shoe.html#tab:KMtable2) and Table [6\.6](shoe.html#tab:KNMtable2). The difference in side lengths for \\(Q,K\_2\\) is about five times larger than that for the \\(Q,K\_1\\) comparison. In addition, \\(Q,K\_1\\) has a high average overlap on \\(Q\\) of about 94%, while \\(Q,K\_2\\) averages about 77%. Therefore, we have evidence that prints \\(Q\\) and \\(K\_1\\) were left by the same shoe, while prints \\(Q\\) and \\(K\_2\\) were left by different shoes.
Finally, we reveal the truth: the impressions \\(Q\\) and \\(K\_1\\) are from the same shoe. The impression \\(K\_2\\) was scanned from a different shoe of the same size, brand, and style. Because the two shoes share class characteristics, the comparisons we did here were the hardest cases. To see what values are typical for mates and non\-mates, we need to do more analyses to obtain similarity features from many comparisons. In other analyses, for instance in Park ([2018](#ref-park2018learning)), we use random forests to classify mates and non\-mates using the definitions of clique size, overlap, etc. as the features. The empirical probability from the random forest could serve as a score, similar to what was done in Chapter [7](glass.html#glass).
[
#### *Soyoung Park, Sam Tyner*
6\.1 Introduction
-----------------
In the mess of a crime scene, one of the most abundant pieces of evidence is a [shoe outsole](glossary.html#def:shoeoutsole) impression (Bodziak [2017](#ref-bodziak2017footwear)). A shoe outsole impression is the trace of a shoe that is left behind when the shoe comes in contact with walking surface. Shoe outsole impressions are often left in pliable materials such as sand, dirt, snow, or blood. Crime scene impressions are lifted using [adhesive](glossary.html#def:adhesivelift) or [electrostatic](glossary.html#def:electrolift) lifting, or [casting](glossary.html#def:casting) to obtain the print left behind.
When a shoe outsole impression is found at a crime scene, the question of interest is, “Which shoe left that impression?” Suppose we have a database of shoe outsole impressions. We want to find the close images in the database to the questioned shoe impression to determine the shoe brand, size, and other [class characteristics](glossary.html#def:classchar). Alternatively, if we have information about potential suspects’ shoes, then we need to investigate whether the questioned shoe impressions share characteristics. The [summary statistic](glossary.html#def:summstat) we need is the degree of correspondence between the questioned shoe outsole impression from the crime scene (\\(Q\\)) and the known shoeprint from a database or a suspect (\\(K\\)). If the similarity between \\(Q\\) and \\(K\\) is high enough, then we may conclude that the source for \\(Q\\) and \\(K\\) is the same. Thus, the goal is to quantify the degree of correspondence between two shoe outsole impressions.
Note that \\(Q\\) and \\(K\\) have different origins for shoe than they do for [trace glass evidence](glass.html#glass): for glass evidence, sample \\(Q\\) comes from a suspect and \\(K\\) comes from the crime scene, while for shoe outsole impression evidence, sample \\(Q\\) comes from the crime scene and sample \\(K\\) comes from a suspect or a [database](glossary.html#def:database).
### 6\.1\.1 Sources of variability in shoe outsole impressions
There are many characteristics of shoes that are examined. First, the size of the shoe is determined, then the style and manufacturer. These characteristics are known as [*class characteristics*](glossary.html#def:classchar): there are large numbers of shoes that share these characteristics, most other shoes that do not share class characteristics with the impression are easily excluded. For instance, a very popular shoe in the United States is the Nike Air Force one, pictured below (Smith [2009](#ref-fbishoes)). So, seeing a Nike logo and concentric circles in an impression from a Men’s size 13 instantly excludes all shoes that are not Nike Air Force Ones in Men’s size 13\.
Figure 6\.1: The outsole of a Nike Air Force One Shoe. This pattern is common across all shoes in the Air Force One model. Source: [nike.com](https://c.static-nike.com/a/images/t_PDP_1728_v1/f_auto/q3byyhcazwdatzj7si11/air-force-1-mid-07-womens-shoe-0nT1KyYW.jpg)
Next, [*subclass characteristics*](glossary.html#def:subclasschar) are examined. These characteristics are shared by a subset of elements in a class, but not by all elements in the class. In shoe impressions, subclass characteristics usually occur during the manufacturing process. For instance, air bubbles may form in one manufacturing run but not in another. Also, the different molds used to create the same style and size shoes can have slight differences, such as where pattern elements intersect (Bodziak [1986](#ref-mfrshoes)). Just like with class characteristics, subclass characteristics can be used to eliminate possible shoes very easily.
Finally, the most unique parts of a shoe outsole impression are the [randomly acquired characteristics](glossary.html#def:racs) (RACs) left behind. The RACs are smaller knicks, gouges, and debris in soles of shoes that are acquired over time, seemingly at random, as the shoes are worn. These are the *identifying* characteristics of shoe impressions, and examiners look for these irregularities in the crime scene impressions to make an identification.
### 6\.1\.2 Current practice
Footwear examiners compare \\(Q\\) and \\(K\\) by considering class, subclass and identifying characteristics on two impressions. The guideline for comparing footwear impressions from the Scientific Working Group for Shoeprint and Tire Tread Evidence (SWGTREAD), details seven possible conclusions for comparing \\(Q\\) and \\(K\\) (*Standard for Terminology Used for Forensic Footwear and Tire Impression Evidence* [2013](#ref-swgtreadconclude)):
1. Lacks sufficient detail (comparison not possible)
2. Exclusion
3. Indications of non\-association
4. Limited association of class characteristics
5. Association of class characteristics
6. High degree of association
7. Identification
Examiners rely on their knowledge, experience, and [guidelines](http://treadforensics.com/images/swgtread/standards/current/swgtread_08_examination_200603.pdf) from SWGTREAD to come to one of these seven conclusions for footwear impression examinations.
### 6\.1\.3 Goal of this chapter
In this chapter, we will show our algorithmic approach to measure the degree of similarity between two shoe outsole impression. For this, [CSAFE](https://forensicstats.org) collected shoe outsole impression data, developed the method for calculating similarity score between two shoe impressions and R package [`shoeprintr`](https://github.com/csafe-isu/shoeprintr) for the developed method (Park and Carriquiry [2019](#ref-R-shoeprintr)[b](#ref-R-shoeprintr)).
### 6\.1\.1 Sources of variability in shoe outsole impressions
There are many characteristics of shoes that are examined. First, the size of the shoe is determined, then the style and manufacturer. These characteristics are known as [*class characteristics*](glossary.html#def:classchar): there are large numbers of shoes that share these characteristics, most other shoes that do not share class characteristics with the impression are easily excluded. For instance, a very popular shoe in the United States is the Nike Air Force one, pictured below (Smith [2009](#ref-fbishoes)). So, seeing a Nike logo and concentric circles in an impression from a Men’s size 13 instantly excludes all shoes that are not Nike Air Force Ones in Men’s size 13\.
Figure 6\.1: The outsole of a Nike Air Force One Shoe. This pattern is common across all shoes in the Air Force One model. Source: [nike.com](https://c.static-nike.com/a/images/t_PDP_1728_v1/f_auto/q3byyhcazwdatzj7si11/air-force-1-mid-07-womens-shoe-0nT1KyYW.jpg)
Next, [*subclass characteristics*](glossary.html#def:subclasschar) are examined. These characteristics are shared by a subset of elements in a class, but not by all elements in the class. In shoe impressions, subclass characteristics usually occur during the manufacturing process. For instance, air bubbles may form in one manufacturing run but not in another. Also, the different molds used to create the same style and size shoes can have slight differences, such as where pattern elements intersect (Bodziak [1986](#ref-mfrshoes)). Just like with class characteristics, subclass characteristics can be used to eliminate possible shoes very easily.
Finally, the most unique parts of a shoe outsole impression are the [randomly acquired characteristics](glossary.html#def:racs) (RACs) left behind. The RACs are smaller knicks, gouges, and debris in soles of shoes that are acquired over time, seemingly at random, as the shoes are worn. These are the *identifying* characteristics of shoe impressions, and examiners look for these irregularities in the crime scene impressions to make an identification.
### 6\.1\.2 Current practice
Footwear examiners compare \\(Q\\) and \\(K\\) by considering class, subclass and identifying characteristics on two impressions. The guideline for comparing footwear impressions from the Scientific Working Group for Shoeprint and Tire Tread Evidence (SWGTREAD), details seven possible conclusions for comparing \\(Q\\) and \\(K\\) (*Standard for Terminology Used for Forensic Footwear and Tire Impression Evidence* [2013](#ref-swgtreadconclude)):
1. Lacks sufficient detail (comparison not possible)
2. Exclusion
3. Indications of non\-association
4. Limited association of class characteristics
5. Association of class characteristics
6. High degree of association
7. Identification
Examiners rely on their knowledge, experience, and [guidelines](http://treadforensics.com/images/swgtread/standards/current/swgtread_08_examination_200603.pdf) from SWGTREAD to come to one of these seven conclusions for footwear impression examinations.
### 6\.1\.3 Goal of this chapter
In this chapter, we will show our algorithmic approach to measure the degree of similarity between two shoe outsole impression. For this, [CSAFE](https://forensicstats.org) collected shoe outsole impression data, developed the method for calculating similarity score between two shoe impressions and R package [`shoeprintr`](https://github.com/csafe-isu/shoeprintr) for the developed method (Park and Carriquiry [2019](#ref-R-shoeprintr)[b](#ref-R-shoeprintr)).
6\.2 Data
---------
Crime scene data that we can utilize to develop and test a comparison algorithm for footwear comparison is very limited for several reasons:
1. Shoe impressions found at crime scenes are confidential because they are a part of cases involving real people.
2. Although some real, anonymized crime scene data is available, we typically don’t know the true source of the impression.
3. Most shoe outsole impressions found at the crime scene are partial, degraded, or otherwise imperfect. They are not appropriate to develop and test an algorithm.
### 6\.2\.1 Data collection
CSAFE collected shoe impressions using the two\-dimensional EverOS footwear scanner from [Evident, Inc.](https://www.shopevident.com/product/everos-laboratory-footwear-scanner). The scanner is designed to scan the shoe outsole as the shoe wearer steps onto the scanner. As more pressure is put on the scanner, there will be more detailed patterns detected from the outsole. The resulting images have resolution of 300 DPI. In addition, the images show a ruler to allow analysts to measure the size of the scanned impression. Figure [6\.2](shoe.html#fig:everosx) shows examples of impression images from the EverOS scanner. On the left, there are two replicates of the left shoe from one shoe style, while on the right, there are two replicates from the right shoe from a different shoe stle. The repeated impressions are very similar, but not exact because there are some differences in the amount and location of pressure from the shoe wearer while scanning.
Figure 6\.2: Examples of images from the EverOS scanner. At left, two replicates of one left shoe. At right, two replicates from the right shoe of another pair of shoes.
This scanner enables us to collect shoe impressions and make comparisons where ground truth is known. By collecting repeated impressions from the same shoe, we can construct comparisons between known mates (same shoe) and known non\-mates (different shoes). CSAFE has made available a large collection of shoe outsole impressions from the EverOS scanner (CSAFE [2019](#ref-shoedb)).
### 6\.2\.2 Transformation of shoe image
The resulting image from the EverOS scanner is very large, making comparisons of images very time\-consuming. To speed up the comparison process, we used [MATLAB](https://www.mathworks.com/products/matlab.html)
to downsample all of the images at 20%. Next, we apply [edge detection](https://en.wikipedia.org/wiki/Edge_detection) to the images, which leaves us with outlines of important patterns of shoe outsoles. We extract the edges of the outsole impression using the [Prewitt operator](https://en.wikipedia.org/wiki/Prewitt_operator) to obtain the (\\(x\\),\\(y\\)) coordinates of the shoe patterns. All class and subclass characteristics, as well as possible RACs on the outsole impression are retained using this method.
The final data we use for comparison are the \\(x\\) and \\(y\\) coordinate values of the impression area edges that we extracted from the original image in the \\(Q\\) shoe and the \\(K\\) shoe, as shown in Figure [6\.3](shoe.html#fig:datshoe).
```
shoeQ <- read.csv("dat/Q_03L_01.csv")
shoeK <- read.csv("dat/K_03L_05.csv")
shoeQ$source <- "Q"
shoeK$source <- "K"
library(tidyverse)
bind_rows(shoeQ, shoeK) %>% ggplot() + geom_point(aes(x = x, y = y), size = 0.05) +
facet_wrap(~source, scales = "free")
```
Figure 6\.3: Outsole impressions of two shoes, \\(K\\) and \\(Q\\).
### 6\.2\.3 Definition of classes
We use the data CSAFE collected to construct pairs of known mates (KM) and known non\-mates (KNM) for comparison. The KMs are two repeated impressions from the same shoe, while the KNMs are from two different shoes. For the known non\-mates, the shoes are identical in size, style, and brand, but they come from two different pairs of shoes worn by two different people. We want to train a comparison method using this data because shoes that are the same size, style, and brand are the most similar to each other, and thus the detection of differences will be very hard. By training a method on the hardest comparisons, we make all comparisons stronger.
### 6\.2\.1 Data collection
CSAFE collected shoe impressions using the two\-dimensional EverOS footwear scanner from [Evident, Inc.](https://www.shopevident.com/product/everos-laboratory-footwear-scanner). The scanner is designed to scan the shoe outsole as the shoe wearer steps onto the scanner. As more pressure is put on the scanner, there will be more detailed patterns detected from the outsole. The resulting images have resolution of 300 DPI. In addition, the images show a ruler to allow analysts to measure the size of the scanned impression. Figure [6\.2](shoe.html#fig:everosx) shows examples of impression images from the EverOS scanner. On the left, there are two replicates of the left shoe from one shoe style, while on the right, there are two replicates from the right shoe from a different shoe stle. The repeated impressions are very similar, but not exact because there are some differences in the amount and location of pressure from the shoe wearer while scanning.
Figure 6\.2: Examples of images from the EverOS scanner. At left, two replicates of one left shoe. At right, two replicates from the right shoe of another pair of shoes.
This scanner enables us to collect shoe impressions and make comparisons where ground truth is known. By collecting repeated impressions from the same shoe, we can construct comparisons between known mates (same shoe) and known non\-mates (different shoes). CSAFE has made available a large collection of shoe outsole impressions from the EverOS scanner (CSAFE [2019](#ref-shoedb)).
### 6\.2\.2 Transformation of shoe image
The resulting image from the EverOS scanner is very large, making comparisons of images very time\-consuming. To speed up the comparison process, we used [MATLAB](https://www.mathworks.com/products/matlab.html)
to downsample all of the images at 20%. Next, we apply [edge detection](https://en.wikipedia.org/wiki/Edge_detection) to the images, which leaves us with outlines of important patterns of shoe outsoles. We extract the edges of the outsole impression using the [Prewitt operator](https://en.wikipedia.org/wiki/Prewitt_operator) to obtain the (\\(x\\),\\(y\\)) coordinates of the shoe patterns. All class and subclass characteristics, as well as possible RACs on the outsole impression are retained using this method.
The final data we use for comparison are the \\(x\\) and \\(y\\) coordinate values of the impression area edges that we extracted from the original image in the \\(Q\\) shoe and the \\(K\\) shoe, as shown in Figure [6\.3](shoe.html#fig:datshoe).
```
shoeQ <- read.csv("dat/Q_03L_01.csv")
shoeK <- read.csv("dat/K_03L_05.csv")
shoeQ$source <- "Q"
shoeK$source <- "K"
library(tidyverse)
bind_rows(shoeQ, shoeK) %>% ggplot() + geom_point(aes(x = x, y = y), size = 0.05) +
facet_wrap(~source, scales = "free")
```
Figure 6\.3: Outsole impressions of two shoes, \\(K\\) and \\(Q\\).
### 6\.2\.3 Definition of classes
We use the data CSAFE collected to construct pairs of known mates (KM) and known non\-mates (KNM) for comparison. The KMs are two repeated impressions from the same shoe, while the KNMs are from two different shoes. For the known non\-mates, the shoes are identical in size, style, and brand, but they come from two different pairs of shoes worn by two different people. We want to train a comparison method using this data because shoes that are the same size, style, and brand are the most similar to each other, and thus the detection of differences will be very hard. By training a method on the hardest comparisons, we make all comparisons stronger.
6\.3 R Package(s)
-----------------
The R package `shoeprintr` was created to perform shoe outsole comparisons (Chapter 3 in Park ([2018](#ref-park2018learning))).
To begin, the package can be installed from GitHub using [`devtools`](https://devtools.r-lib.org/):
```
devtools::install_github("CSAFE-ISU/shoeprintr", force = TRUE)
```
We then attach it and other packages below:
```
library(shoeprintr)
library(patchwork)
```
We use the following method to quantify the similarity between two impressions:
1. Select “interesting” sub\-areas in the \\(Q\\) impression found at the crime scene.
2. Find the closest corresponding sub\-areas in the \\(K\\) impression.
3. Overlay sub\-areas in \\(Q\\) with the closest corresponding areas in \\(K\\).
4. Define similarity features we can measure to create an outsole signature.
5. Combine those features into one single score.
To begin our comparison, we choose three circular sub\-areas of interest in \\(Q\\) by giving the information of their centers and their radii. These areas can be selected “by hand” by examiners, as we do here, or by computer, as we do in Section [6\.5](shoe.html#shoe-case-study). The three circles of interest are given in `input_circles`
```
input_circ <- matrix(c(75.25, 110, 170, 600.4, 150, 470, 50, 50, 50), nrow = 3,
ncol = 3)
input_circ
```
```
## [,1] [,2] [,3]
## [1,] 75.25 600.4 50
## [2,] 110.00 150.0 50
## [3,] 170.00 470.0 50
```
For circle \\(q\_1\\), we select the center of (75\.25, 600\.4\) and the radius of 50\. The second and the third rows in this matrix show center and radius for circle \\(q\_2, q\_3\\).
```
start_plot(shoeQ[, -3], shoeK[, -3], input_circ)
```
Here, we can draw the graph of coordinates of edges from \\(Q\\) and \\(K\\). For \\(Q\\), we use the function `start_plot` draw the \\(Q\\) impressions colored by the three circular areas we chose. We call the red circle, \\(q\_1\\), orange \\(q\_2\\), and green \\(q\_3\\). The goal here is to find the closest areas to \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(K\\).
To find the closest areas of \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(K\\), we use the function `match_print_subarea`. In this particular comparisons with full impressions of \\(Q\\) and \\(K\\), we know that circle \\(q\_1\\) is located in left toe area of \\(Q\\). Thus, the algorithm confines the searching area in \\(K\\) into upper left area of \\(K\\).
```
match_print_subarea(shoeQ, shoeK, input_circ)
```
Figure 6\.4: Final match result between shoe Q and shoe K
The function `match_print_subarea` will produce the best matching areas in shoe \\(K\\) for circles of \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(Q\\), which we fixed. Figure [6\.4](shoe.html#fig:matchgraph) shows the final result from the function `match_print_subarea`. Three circles of red, orange, green colors in the right panel in Figure [6\.4](shoe.html#fig:matchgraph) indicate that those circles that the algorithm found showed the best overlap with circles of \\(q\_1\\), \\(q\_2\\), and \\(q\_3\\) in shoe \\(Q\\).
Next, we explore how the function `match_print_subarea` finds the corresponding areas between two shoe impressions. Too find the corresponding areas for circle \\(q\_1\\) in shoe \\(K\\), the underlying algorithm first finds many candidate circles in shoe \\(K\\), as shown in Figure [6\.5](shoe.html#fig:step1plot).
Figure 6\.5: Circle \\(q\_11\\) in \\(Q\\) (left) and candidate circles in \\(K\\)
For the comparison, the function selects many candidate circles in \\(K\\) labeled as circle \\(k\_1, \\dots, k\_9\\) in Figure [6\.5](shoe.html#fig:step1plot). For candidate circles in \\(K\\), we use larger radius than circle \\(q\_1\\) because we want any candidate circles to fully contain circle \\(q\_1\\) if they are mates. Ideally, the union of the candidate circles in \\(K\\) should contain all edge points in \\(K\\). The algorithm compares all candidate circles in shoe \\(K\\) with the fixed circle \\(q\_1\\)and picks the closest one as the area corresponding to \\(q\_1\\) in shoe \\(K\\).
### 6\.3\.1 One subarea matching
In this section, we show one example of the matching process by examining the comparison between circle \\(q\_1\\) and circle \\(k\_8\\) in Figure [6\.5](shoe.html#fig:step1plot). We selected \\(k\_8\\) because it is a close match to circle \\(q\_1\\) well.
```
nseg = 360
circle_q1 <- data.frame(int_inside_center(data.frame(shoeQ), 50, nseg, 75.25,
600.4))
circle_k8 <- data.frame(int_inside_center(data.frame(shoeK), 65, nseg, 49, 700))
match_q1_k8 <- boosted_clique(circle_q1, circle_k8, ncross_in_bins = 30, xbins_in = 20,
ncross_in_bin_size = 1, ncross_ref_bins = NULL, xbins_ref = 30, ncross_ref_bin_size = NULL,
eps = 0.75, seed = 1, num_cores = parallel::detectCores(), plot = TRUE,
verbose = FALSE, cl = NULL)
```
The `shoeprinter` function `boosted_clique` finds the subset of pixels (the [maximal clique](glossary.html#def:maxclique)) that can be used for alignment of circle \\(q\_1\\) and circle \\(k\_8\\). Using corresponding pixels, the function computes the rotation angle and translation metric which result the best alignment between them. Circle \\(q\_1\\) is transformed to be on top of the circle \\(k\_8\\), using the calculated alignment information. The function then produces summary plots as shown in Figure [6\.6](shoe.html#fig:MCresult) and a table of similarity features as shown in Table [6\.1](shoe.html#tab:resulttable).
Figure 6\.6: Resulting plot when comparing circle \\(q\_1\\) (blue points) and circle \\(k\_8\\) (red points).
Figure [6\.6](shoe.html#fig:MCresult) has four sections. In the first row, first column, the distances between all points in the maximal clique are shown: the x\-axis is the distance between points in circle \\(k\_8\\) and the y\-axis is the distance between points in circle \\(q\_1\\). These values should fall on or near the \\(y\=x\\) diagonal, because for identical circles, the points in the maximal clique are the same. The second plot in the first row of Figure [6\.6](shoe.html#fig:MCresult) shows the \\((x,y)\\) values of the points in the maximal clique. Red points are from \\(k\_8\\), blue circles are from \\(q\_1\\). The bottom two plots show all points in the two circles after alignment. We can see that the two circles share a large area of overlap: the blue points are almost perfectly overlapping with the red points.
Table 6\.1: Resulting table from the matching between q1 and k8
| Clique size | Rot. angle | Overlap on k8 | Overlap on q1 | Median distance | center x | center y | radius |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 18 | 12\.05 | 0\.75 | 0\.97 | 0\.3 | 54\.5 | 688\.5 | 50\.28 |
Table [6\.1](shoe.html#tab:resulttable) contains the similarity measures and other information about \\(k\_8\\), the closest matching circle to \\(q\_1\\). The first five features measure the degree of similarity between circle \\(q\_1\\) and circle \\(k\_8\\), after aligning them. There are 18 pixels in the maximal clique, and the rotation angle between them is 12\.05\\(^o\\). The two overlap features indicate that 75% of circle \\(k\_8\\) pixels are overlapped with circle \\(q\_1\\) and 97% of circle \\(q\_1\\) pixels are overlapped with circle \\(k\_8\\) after alignment. The median value of the distance between two close points after aligning the circles is 0\.3\. For the clique size and overlap metrics, larger values indicate more similar circles, while the opposite is true for distance metrics. The last three columns are about information of the found circle in \\(K\\) that is matched the best with circle \\(q\_1\\). Thus, the center (54\.5, 688\.5\) and the radius 50\.28 in \\(K\\) is the best matching circle to \\(q\_1\\).
Figure 6\.7: Circles \\(q\_1\\) and \\(k\_8\\) in context.
Figure [6\.7](shoe.html#fig:graphq1) shows the two matching circles as part of the larger shoe impression. By fixing circle \\(q\_1\\) (in blue), we found the closest circle (in red) in shoe \\(K\\). The blue circle in \\(Q\\) and red circle in \\(K\\) look pretty similar. This process would be repeated for the two other circles that we fixed in \\(Q\\), using the function `match_print_subarea`.
### 6\.3\.1 One subarea matching
In this section, we show one example of the matching process by examining the comparison between circle \\(q\_1\\) and circle \\(k\_8\\) in Figure [6\.5](shoe.html#fig:step1plot). We selected \\(k\_8\\) because it is a close match to circle \\(q\_1\\) well.
```
nseg = 360
circle_q1 <- data.frame(int_inside_center(data.frame(shoeQ), 50, nseg, 75.25,
600.4))
circle_k8 <- data.frame(int_inside_center(data.frame(shoeK), 65, nseg, 49, 700))
match_q1_k8 <- boosted_clique(circle_q1, circle_k8, ncross_in_bins = 30, xbins_in = 20,
ncross_in_bin_size = 1, ncross_ref_bins = NULL, xbins_ref = 30, ncross_ref_bin_size = NULL,
eps = 0.75, seed = 1, num_cores = parallel::detectCores(), plot = TRUE,
verbose = FALSE, cl = NULL)
```
The `shoeprinter` function `boosted_clique` finds the subset of pixels (the [maximal clique](glossary.html#def:maxclique)) that can be used for alignment of circle \\(q\_1\\) and circle \\(k\_8\\). Using corresponding pixels, the function computes the rotation angle and translation metric which result the best alignment between them. Circle \\(q\_1\\) is transformed to be on top of the circle \\(k\_8\\), using the calculated alignment information. The function then produces summary plots as shown in Figure [6\.6](shoe.html#fig:MCresult) and a table of similarity features as shown in Table [6\.1](shoe.html#tab:resulttable).
Figure 6\.6: Resulting plot when comparing circle \\(q\_1\\) (blue points) and circle \\(k\_8\\) (red points).
Figure [6\.6](shoe.html#fig:MCresult) has four sections. In the first row, first column, the distances between all points in the maximal clique are shown: the x\-axis is the distance between points in circle \\(k\_8\\) and the y\-axis is the distance between points in circle \\(q\_1\\). These values should fall on or near the \\(y\=x\\) diagonal, because for identical circles, the points in the maximal clique are the same. The second plot in the first row of Figure [6\.6](shoe.html#fig:MCresult) shows the \\((x,y)\\) values of the points in the maximal clique. Red points are from \\(k\_8\\), blue circles are from \\(q\_1\\). The bottom two plots show all points in the two circles after alignment. We can see that the two circles share a large area of overlap: the blue points are almost perfectly overlapping with the red points.
Table 6\.1: Resulting table from the matching between q1 and k8
| Clique size | Rot. angle | Overlap on k8 | Overlap on q1 | Median distance | center x | center y | radius |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 18 | 12\.05 | 0\.75 | 0\.97 | 0\.3 | 54\.5 | 688\.5 | 50\.28 |
Table [6\.1](shoe.html#tab:resulttable) contains the similarity measures and other information about \\(k\_8\\), the closest matching circle to \\(q\_1\\). The first five features measure the degree of similarity between circle \\(q\_1\\) and circle \\(k\_8\\) after aligning them. There are 18 pixels in the maximal clique, and the rotation angle between the circles is 12\.05\\(^\\circ\\). The two overlap features indicate that 75% of circle \\(k\_8\\) pixels overlap with circle \\(q\_1\\) and 97% of circle \\(q\_1\\) pixels overlap with circle \\(k\_8\\) after alignment. The median distance between paired points after aligning the circles is 0\.3\. For the clique size and overlap metrics, larger values indicate more similar circles, while the opposite is true for the distance metric. The last three columns give information about the circle in \\(K\\) that best matches circle \\(q\_1\\): the circle in \\(K\\) with center (54\.5, 688\.5\) and radius 50\.28 is the best match to \\(q\_1\\).
Figure 6\.7: Circles \\(q\_1\\) and \\(k\_8\\) in context.
Figure [6\.7](shoe.html#fig:graphq1) shows the two matching circles as part of the larger shoe impression. By fixing circle \\(q\_1\\) (in blue), we found the closest circle (in red) in shoe \\(K\\). The blue circle in \\(Q\\) and red circle in \\(K\\) look pretty similar. This process would be repeated for the two other circles that we fixed in \\(Q\\), using the function `match_print_subarea`.
6\.4 Drawing Conclusions
------------------------
To determine whether impressions \\(Q\\) and \\(K\\) were made by the same shoe or not, we need to define the signature of the questioned shoe outsole impression. We define the signature of shoe \\(Q\\) as the three circular areas of the shoe after edge detection.
Figure 6\.8: Comparing the lengths of the blue lines is important for determining if the same shoe made impression \\(Q\\) and impression \\(K\\).
Figure [6\.8](shoe.html#fig:signature) shows the signature in \\(Q\\) and the corresponding areas in \\(K\\) found by the `match_print_subarea` function. We use the average of the five similarity features from the three circle matchings to draw a conclusion. If the circles in \\(K\\) match the circles in \\(Q\\), then the corresponding geometry of the three circles in \\(Q\\) and \\(K\\) should be very similar. The blue lines in Figure [6\.8](shoe.html#fig:signature) form a triangle whose vertices are the centers of the three circles in each shoe impression. The differences in side lengths (numbered 1, 2, 3 in the figure) between the two triangles are an additional feature used to determine whether \\(Q\\) and \\(K\\) were made by the same shoe. If the two triangles in Figure [6\.8](shoe.html#fig:signature) have similar side lengths, this is evidence that shoe \\(Q\\) and shoe \\(K\\) are from the same source.
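A minimal sketch of how these side\-length differences could be computed is below. The objects `centers_Q` and `centers_K` are hypothetical 3 x 2 matrices holding the circle centers in \\(Q\\) and their best matches in \\(K\\), and the ordering of the sides is assumed.
```
# Sketch of the triangle side-length feature (hypothetical helper, not from
# the package). centers_Q and centers_K are assumed 3 x 2 matrices of circle
# centers in Q and K, one row per circle, in matching order.
side_lengths <- function(centers) {
  d <- as.matrix(dist(centers))      # pairwise Euclidean distances
  c(d[1, 2], d[2, 3], d[1, 3])       # the three triangle side lengths
}
# absolute differences of corresponding side lengths, one per circle comparison:
# abs(side_lengths(centers_Q) - side_lengths(centers_K))
```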
Table 6\.2: Resulting table comparing Q and K.
| Comparison | Clique size | Rot. angle | Overlap on k\* | Overlap on q | Median distance | Diff in length from Triangle |
| --- | --- | --- | --- | --- | --- | --- |
| \\(q\_1\\)\-\\(k^\*\_1\\) | 18 | 12\.05 | 0\.75 | 0\.97 | 0\.3 | 0\.58 |
| \\(q\_2\\)\-\\(k^\*\_2\\) | 17 | 10\.57 | 0\.53 | 0\.91 | 0\.43 | 0\.55 |
| \\(q\_3\\)\-\\(k^\*\_3\\) | 20 | 12\.14 | 0\.63 | 1 | 0\.24 | 1\.03 |
Table [6\.2](shoe.html#tab:finaltable) contains the summary features from comparing the signature, the three circles in \\(Q\\), to the closest matching areas in \\(K\\). There is one row for each comparison between \\(q\_i\\) and \\(k^{\*}\_i\\) for \\(i\=1,2,3\\). The last column is the absolute value of the difference in triangle side lengths between the three circles in \\(Q\\) and in \\(K\\). Smaller differences indicate more similarity between \\(Q\\) and \\(K\\). Finally, we take the average of all features except the rotation angle over the three circles. For the rotation angle, we take the standard deviation of the three measurements instead of the mean. If the two prints come from the same shoe, the rotation angle will be very similar in all three circles, so we expect small values of the standard deviation to indicate mates, while large values indicate non\-mates.
6\.5 Case Study
---------------
Comparing two impressions with totally different outsole patterns is a very easy problem for humans and computers. But what if we are tasked with comparing two impressions from two different shoes, worn by different people, that have the same brand, model, and size?
Suppose we have one questioned shoe outsole impression (\\(Q\\)) and two known shoe impressions (\\(K\_1, K\_2\\)) from two different shoes. All three impressions share the same [class characteristics](glossary.html#def:classchar). How can we conclude if the source of \\(Q\\) is the same as \\(K\_1\\) or \\(K\_2\\)?
Figure 6\.9: Example images of shoe outsole impressions. The questioned impression (\\(Q\\)) and two known impressions (\\(K\_1\\), \\(K\_2\\)). Does \\(Q\\) have the same source as \\(K\_1\\) or \\(K\_2\\)?
Figure [6\.9](shoe.html#fig:eximgs) displays three outsole impressions. At a cursory glance, it looks like impression \\(Q\\) could have been made by the same shoe as either \\(K\_1\\) or \\(K\_2\\). The two known impressions are from the same brand and model of shoe, worn by two different people for about six months. Since the three impressions all share class and subclass characteristics, this is a very hard comparison. There are some differences among images \\(Q, K\_1, K\_2\\), but it is hard to determine whether these differences are due to measurement error or variation in the EverOS scanner, or to wear and tear of the outsole or RACs.
### 6\.5\.1 Compare \\(Q\\) and \\(K\_1\\)
Let’s compare \\(Q\\) and \\(K\_1\\) first. In \\(Q\\), we select three circular areas. For this, we use the function `initial_circle` to select three centers at the (30%, 80%), (20%, 40%), and (70%, 70%) quantiles of the \\((x, y)\\) ranges of the coordinates in \\(Q\\). The function automatically generates the corresponding circle centers, each with a radius of 50\.
```
input_circles_Q <- initial_circle(imgQ)
input_circles_Q
```
```
## [,1] [,2] [,3]
## [1,] 89.7 578.2 50
## [2,] 67.8 296.6 50
## [3,] 177.3 507.8 50
```
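As a rough illustration of how such centers could be derived, the sketch below picks quantiles of the pixel coordinates. This is not the internal code of `initial_circle`, which may differ, and it assumes `imgQ` is a data frame with numeric columns `x` and `y`.
```
# Hypothetical sketch of quantile-based circle centers (initial_circle() may
# compute them differently). Assumes imgQ has numeric columns x and y.
quantile_centers <- function(img, probs = list(c(0.3, 0.8), c(0.2, 0.4), c(0.7, 0.7)),
                             radius = 50) {
  t(sapply(probs, function(p) {
    c(x = unname(quantile(img$x, p[1])),
      y = unname(quantile(img$y, p[2])),
      r = radius)
  }))
}
# quantile_centers(imgQ)  # compare with initial_circle(imgQ) above
```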
```
start_plot(imgQ, imgK1, input_circles_Q)
```
Figure 6\.10: The input circles and the shoe \\(Q\\) impression (left) and the shoe \\(K\_1\\) impression (right).
Next, run `match_print_subarea()` to find circles in \\(K\_1\\) that most closely correspond to the circles in \\(Q\\). The result is shown in Figure [6\.11](shoe.html#fig:exKM). In \\(K\_1\\), the algorithm finds the closest circles, which show the most overlap with each of the circles that we fixed in \\(Q\\). The corresponding circle information and summary features are shown in Tables [6\.3](shoe.html#tab:KMtable1) and [6\.4](shoe.html#tab:KMtable2), respectively. Each row in Table [6\.4](shoe.html#tab:KMtable2) shows the similarity features from circle \\(q\_i\\) and circle \\(k^{1}\_i\\) for \\(i\=1,2,3\\).
```
match_print_subarea(imgQ, imgK1, input_circles_Q)
```
Figure 6\.11: The input circles (\\(q\_1, q\_2, q\_3\\)) and the shoe \\(Q\\) impression (left) and the closest matching circles (\\(k\_1^1, k\_2^1, k\_3^1\\)) and shoe \\(K\_1\\) impression (right), which are the result of `match_print_subarea`.
Table 6\.3: Centers and radius information for circles in \\(K\_1\\) that are the closest match to the circles in \\(Q\\) according to the matching algorithm.
| Reference\_X | Reference\_Y | Reference\_radius |
| --- | --- | --- |
| 71\.0 | 567\.5 | 52\.28 |
| 85\.5 | 286\.0 | 56\.32 |
| 182\.5 | 495\.0 | 54\.15 |
Table 6\.4: Similarity features from comparing circles in \\(Q\\) to the best matching circles in \\(K\_1\\).
| clique\_size | rotation\_angle | reference\_overlap | input\_overlap | med\_dist\_euc | diff |
| --- | --- | --- | --- | --- | --- |
| 15 | 1\.23 | 0\.55 | 0\.86 | 0\.49 | 0\.58 |
| 18 | 1\.88 | 0\.42 | 1\.00 | 0\.09 | 20\.62 |
| 17 | 2\.68 | 0\.65 | 0\.96 | 0\.31 | 7\.49 |
### 6\.5\.2 Compare \\(Q\\) and \\(K\_2\\)
We repeat the process from Section [6\.5\.1](shoe.html#sec:shoeqk1) to find the best matching circles in \\(K\_2\\).
```
match_print_subarea(imgQ, imgK2, input_circles_Q)
```
Figure 6\.12: The input circles (\\(q\_1, q\_2, q\_3\\)) and the shoe \\(Q\\) impression (left) and the closest matching circles (\\(k\_1^2, k\_2^2, k\_3^2\\)) and shoe \\(K\_2\\) impression (right), which are the result of `match_print_subarea`.
Figure [6\.12](shoe.html#fig:exKNM) shows the circles from the matching algorithm. In this comparison, \\(k\_3^2\\) is in a very different position compared to \\(q\_{3}\\).
Table 6\.5: Centers and radius information for the circles in \\(K\_2\\) that the matching algorithm found to best overlap the fixed circles in \\(Q\\)
| Reference\_X | Reference\_Y | Reference\_radius |
| --- | --- | --- |
| 66\.0 | 582\.5 | 51\.20 |
| 66\.0 | 302\.0 | 55\.61 |
| 199\.5 | 431\.0 | 51\.44 |
Table 6\.6: Similarity features from comparing circles in \\(Q\\) to the best matching circles in \\(K\_2\\).
| clique\_size | rotation\_angle | reference\_overlap | input\_overlap | med\_dist\_euc | diff |
| --- | --- | --- | --- | --- | --- |
| 18 | 1\.87 | 0\.64 | 0\.89 | 0\.53 | 1\.95 |
| 14 | 7\.16 | 0\.39 | 0\.92 | 0\.52 | 89\.54 |
| 16 | 32\.93 | 0\.26 | 0\.51 | 0\.58 | 52\.26 |
Table [6\.5](shoe.html#tab:KNMtable1) and Table [6\.6](shoe.html#tab:KNMtable2) show the location information of the matching circles in \\(K\_2\\) for the fixed circles in \\(Q\\) and their corresponding similarity features, respectively.
### 6\.5\.3 Interpreting Comparisons between \\(Q, K\_1\\) and \\(Q, K\_2\\)
To summarize the similarity features from the match of two impressions, we take the average of each similarity feature across the three circle matchings, and the standard deviation of the rotation angle estimates. For features such as clique size and overlap, larger values indicate more similar impressions. For the median distance and the absolute difference in the triangle side lengths of the two impressions, smaller values indicate more similar impressions. For the rotation angle estimates, we use the standard deviation of the three values because we are interested in comparing the three values to each other, not in the value of the angles themselves. If all rotations are about the same, the algorithm is consistently finding the same patterns, and the prints are likely from the same shoe, even if the image has been rotated. If the rotation angles are very different, however, the algorithm is forcing similarity where it doesn’t exist, and the prints are likely from different shoes.
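As a check on this summary rule, the sketch below reproduces the \\(Q\-K\_1\\) row of Table [6\.7](shoe.html#tab:summarytable) from the three rows of Table [6\.4](shoe.html#tab:KMtable2): every feature is averaged except the rotation angle, which is summarized by its standard deviation.
```
# Q-K1 features from Table 6.4, one row per circle comparison
qk1 <- data.frame(
  clique_size = c(15, 18, 17),
  rot_angle   = c(1.23, 1.88, 2.68),
  overlap_k   = c(0.55, 0.42, 0.65),
  overlap_q   = c(0.86, 1.00, 0.96),
  med_dist    = c(0.49, 0.09, 0.31),
  diff_length = c(0.58, 20.62, 7.49)
)
# means for every feature, standard deviation for the rotation angle
round(c(clique_size  = mean(qk1$clique_size),   # 16.67
        sd_rot_angle = sd(qk1$rot_angle),       # 0.73
        overlap_k    = mean(qk1$overlap_k),     # 0.54
        overlap_q    = mean(qk1$overlap_q),     # 0.94
        med_dist     = mean(qk1$med_dist),      # 0.30
        diff_length  = mean(qk1$diff_length)),  # 9.56
      2)
```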
Table 6\.7: Summary table of two comparisons of \\(Q\-K\_1\\) and \\(Q\-K\_2\\). Similarity features are averaged but rotation angles are summarized as standard deviation.
| Match | clique\_size | sd\_rot\_angle | overlap\_k | overlap\_q | med\_dist | diff\_length |
| --- | --- | --- | --- | --- | --- | --- |
| Q, \\(K\_1\\) | 16\.67 | 0\.73 | 0\.54 | 0\.94 | 0\.30 | 9\.56 |
| Q,\\(K\_2\\) | 16\.00 | 16\.62 | 0\.43 | 0\.77 | 0\.54 | 47\.92 |
Table [6\.7](shoe.html#tab:summarytable) shows the summarized similarity features from the two comparisons we did, \\(Q,K\_1\\) and \\(Q, K\_2\\). In both comparisons, the average clique size and median distance look similar. The standard deviation (SD) of the rotation angle estimates and the difference in triangle side lengths, however, are very different. The comparison \\(Q,K\_1\\) results in similar rotation estimates (1\.23, 1\.88, 2\.68\), while the comparison \\(Q,K\_2\\) results in very different rotation estimates (1\.87, 7\.16, 32\.93\), as shown in Table [6\.4](shoe.html#tab:KMtable2) and Table [6\.6](shoe.html#tab:KNMtable2). The average difference in side lengths for \\(Q,K\_2\\) is about five times larger than for the \\(Q,K\_1\\) comparison. In addition, \\(Q,K\_1\\) has a high average overlap of about 94%, while \\(Q,K\_2\\) averages about 77%. Therefore, we have evidence that prints \\(Q\\) and \\(K\_1\\) were left by the same shoe, while prints \\(Q\\) and \\(K\_2\\) were left by different shoes.
Finally, we reveal the truth: impressions \\(Q\\) and \\(K\_1\\) are from the same shoe, while impression \\(K\_2\\) was scanned from a different shoe of the same size, brand, and style. Because the two shoes share class characteristics, the comparisons we did here are among the hardest cases. To learn what typical feature values look like for mates and non\-mates, we need to compute similarity features from many more comparisons. In other analyses, for instance in Park ([2018](#ref-park2018learning)), we use random forests to classify mates and non\-mates using clique size, overlap, and the other features defined above. The empirical probability from the random forest could serve as a score, similar to what is done in Chapter [7](glass.html#glass).
| Field Specific |
sctyner.github.io | https://sctyner.github.io/OpenForSciR/glass.html |
Chapter 7 Trace glass evidence: chemical composition
====================================================
#### *Soyoung Park, Sam Tyner*
7\.1 Introduction
-----------------
It is easy to imagine a crime scene with glass fragments: a burglar may have broken a glass door, a glass bottle could have been used in an assault, or a domestic disturbance may involve throwing something through a window. During the commission of a crime, there are many ways that glass can break and be transferred from the scene. The study of glass fragments is important to forensic science because the glass broken at the scene can transfer to the perpetrator’s shoes, clothing, or even their hair (Curran, Hicks, and Buckleton [2000](#ref-curranbook)).
Crime scene investigators collect fragments of glass at the scene as a part of the evidence collection process, and the fragments are sent to the forensic science lab for processing. Similarly, evidence such as clothing and shoes are collected from a suspect, and if glass is found, the fragments are sent to the lab and compared to the fragments found at the scene. The question that the analyst usually tries to answer is, “Did these glass fragments come from the same source?” This is a *source level* question, meaning that the comparison of the fragments will only tell the investigators whether or not the fragments from the suspect and the fragments from the scene have the same origin. As discussed in Section [1\.3](intro.html#forscip), the forensic analysis will not inform investigators *how* the suspect came into contact (*activity level*) with the glass or if the suspect was the perpetrator of the crime (*offense level*) (Roger Cook et al. [1998](#ref-hop)).
### 7\.1\.1 Problems of interest
There are two key problems of interest in glass fragment comparison, but before defining them, we need to define the different types of glass involved in the investigation of a crime. Glass fragments found on the suspect, for example in their hair, shoes, or clothes, are [*questioned*](glossary.html#def:questioned) fragments, which we denote by \\(Q\\). Glass fragments found at the crime scene, for example in front of a broken window or taken from the broken window, are [*known*](glossary.html#def:known) fragments, which we denote \\(K\\). This brings us to a [specific source](glossary.html#def:specsource) question: Did the questioned fragments \\(Q\\) found on the suspect come from the same source of glass as the known fragments \\(K\\), which we know belong to a specific piece of glass at the scene? The goal is to quantify the similarity between \\(Q\\) and \\(K\\). There are many ways to measure similarity between two glass fragments, but the metric should be defined according to available databases of glass fragment measurements for which ground truth is known. For example, if we have elemental compositions measured in parts per million (ppm) as numerical values, the similarity can be quantified by the difference in the chemical compositions of \\(Q\\) and \\(K\\).
### 7\.1\.2 Current practice
There are many types of glass measurements such as color, thickness, [refractive index](glossary.html#def:ri) (RI) and chemical concentrations. In this chapter, we will focus on [float glass](glossary.html#def:floatglass) that is most frequently used in windows, doors and automobiles. For discussions of the other measurements see e.g. Curran, Hicks, and Buckleton ([2000](#ref-curranbook)). The elemental concentrations of float glass that we use here were obtained through inductively coupled mass spectrometry with a laser add\-on (LA\-ICP\-MS). In the current practice, there are two analysis guides from [ASTM International](#def:ASTM), (ASTM\-E2330\-12 [2012](#ref-ASTME233012)) and (ASTM\-E2927\-16 [2016](#ref-ASTME292716)). To determine the source of glass fragments according to these two guides, intervals around the mean concentrations are computed for each element, and if all elements’ intervals overlap, then the conclusion is that the fragments come from the same source. For more detail on these methods, see ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)).
### 7\.1\.3 Comparing glass fragments
In order to determine if two glass fragments come from the same source, a forensic analyst considers many properties of the glass, including color, [fluorescence](glossary.html#def:fluor), thickness, surface features, curvature, and chemical composition. All methods for examining these properties, except for chemical composition analysis, are non\-destructive. If the fragments are large, an exclusion is easy to reach when the fragments are of different colors, because of the wide variety of glass colors possible in manufacturing. Typically, however, glass fragments are quite small and color determination is very difficult. Similarly, the thickness of glass is dictated by the manufacturing process, which aims for uniform thickness, so if two glass fragments differ in thickness by more than 0\.25mm, an exclusion is made (Bottrell [2009](#ref-glassbackground)). For glass fragments of the same color and thickness, microscopic techniques for determining light absorption (fluorescence), curvature, and surface features (such as coatings) are used before the destructive chemical composition analysis.
### 7\.1\.4 Goal of this chapter
In this chapter, we construct a new rule for making glass source conclusions using the [random forest](glossary.html#def:rfdef) algorithm to classify the source of glass fragments (Park and Carriquiry [2019](#ref-park2018)[a](#ref-park2018)).
7\.2 Data
---------
### 7\.2\.1 Chemical composition of glass
The process for determining the chemical composition of a glass fragment is given in great detail in ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)). This destructive method determines elemental composition with Inductively Coupled Plasma Mass Spectrometry (ICP\-MS). Up to 40 elements can be detected in a glass fragment using this method. In Weis et al. ([2011](#ref-weisglass)), only 18 elements are used: calcium (Ca), sodium (Na) and magnesium (Mg) are the major elements, followed by aluminum (Al), potassium (K) and iron (Fe) as minor elements, and lithium (Li), titanium (Ti), manganese (Mn), rubidium (Rb), strontium (Sr), zirconium (Zr), barium (Ba), lanthanum (La), cerium (Ce), neodymium (Nd), hafnium (Hf), and lead (Pb) as the trace elements. The methods of Weis et al. ([2011](#ref-weisglass)) use standard deviations (\\(\\sigma\\)) of repeated measurements of the same fragment to create intervals around the measurements. Intervals of width \\(2\\sigma, 4\\sigma, 6\\sigma, 8\\sigma, 10\\sigma, 12\\sigma, 16\\sigma, 20\\sigma, 30\\sigma,\\) and \\(40\\sigma\\) are considered for overlap.
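A minimal sketch of this interval\-overlap rule for a single interval width is shown below. The data frames `q` and `k`, with one row per element and columns `mean` and `sd` of the repeated measurements, are hypothetical names rather than objects used elsewhere in this chapter.
```
# Hedged sketch of the interval-overlap rule (e.g. width = 4 for 4*sigma).
# q and k are assumed data frames with one row per element and columns
# `mean` and `sd` from the repeated measurements of each fragment.
intervals_overlap <- function(q, k, width = 4) {
  q_lo <- q$mean - width * q$sd; q_hi <- q$mean + width * q$sd
  k_lo <- k$mean - width * k$sd; k_hi <- k$mean + width * k$sd
  # two intervals overlap when neither lies entirely above the other;
  # the rule requires this to hold for every element
  all(q_lo <= k_hi & k_lo <= q_hi)
}
# intervals_overlap(q, k)  # TRUE -> fragments declared to share a source
```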
### 7\.2\.2 Data source
Dr. Alicia Carriquiry of Iowa State University commissioned the collection of a large database of chemical compositions of float glass samples. The details of this database are explained in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)). The full database is available [here](https://github.com/CSAFE-ISU/AOAS-2018-glass-manuscript/tree/master/glassdata). The database includes 31 panes of float glass manufactured by Company A and 17 panes manufactured by Company B, both located in the United States. The Company A panes are labeled AA, AB, … , AAR, and the Company B panes are labeled BA, BB, … ,BR. The panes from Company A were produced during a three week period (January 3\-24, 2017\) and the panes from Company B were produced during a two week period (December 5\-16, 2016\).
To understand variability within a ribbon of glass, two glass panes were collected on almost all days in each company, one from the left side and one from the right side of the ribbon. Twenty four fragments were randomly sampled from each glass pane. Five replicate measurements were obtained for 21 of the 24 fragments in each pane; for the remaining three fragments in each pane, we obtained 20 replicate measurements. Therefore, each pane has 165 measurements for 18 elements. For example, see the heuristic in Figure [7\.1](glass.html#fig:fragsample). In some panes, there may be a fragment with fewer than five replicate measurements. The unit for all measurements is parts per million (ppm).
```
library(tidyverse)
pane <- expand.grid(x = 1:16, y = 1:4)
n <- nrow(pane)
pane$id <- 1:n
frags <- sample(n, 24)
rep20 <- sample(frags, 3)
rep5 <- frags[!(frags %in% rep20)]
pane_sample <- data.frame(frag = c(rep(rep5, each = 5), rep(rep20, each = 20)),
rep = c(rep(1:5, 21), rep(1:20, 3)))
sample_data <- left_join(pane, pane_sample, by = c(id = "frag"))
ggplot(data = sample_data, aes(x = x, y = y)) + geom_tile(fill = "white", color = "black") +
geom_jitter(aes(color = as.factor(rep)), alpha = 0.8, size = 0.5) + scale_color_manual(values = c(rep("black",
20))) + theme_void() + theme(legend.position = "none")
```
Figure 7\.1: An example of how the glass fragments were sampled, if the 64 squares are imagined to be randomly broken fragments within a pane.
### 7\.2\.3 Data structure
Next, we look at the glass data.
```
glass <- read.csv("dat/glass_raw_all.csv")
head(glass)
```
```
## pane fragment Rep mfr element ppm
## 1 AA 1 1 A Al 2678.000
## 2 AA 1 1 A Ba 10.800
## 3 AA 1 1 A Ca 63140.000
## 4 AA 1 1 A Ce 9.520
## 5 AA 1 1 A Fe 667.000
## 6 AA 1 1 A Hf 1.148
```
The elements have very different scales, as some (e.g. Ca) are major elements, some (e.g. Al) are minor elements, and others (e.g. Hf) are trace elements. Thus, we take the natural log of all measurements to put them on a similar scale.
```
# need to make sure the panes are shown in order of mfr date
pane_order <- c(paste0("A", LETTERS[c(1:13, 15, 22:25)]), paste0("AA", LETTERS[c(1:4,
6, 8:13, 17:18)]), names(table(glass$pane))[c(32:48)])
glass$pane <- ordered(glass$pane, levels = pane_order)
glass_log <- glass %>% mutate(log_ppm = log(ppm))
glass_log %>% select(-ppm) %>% spread(element, log_ppm) %>% select(mfr, pane,
fragment, Rep, Li, Na, Mg, Al, K, Ca) %>% head()
```
```
## mfr pane fragment Rep Li Na Mg Al K
## 1 A AA 1 1 0.8329091 11.54025 10.04715 7.892826 7.136483
## 2 A AA 1 2 0.5423243 11.53772 10.03671 7.883069 7.126248
## 3 A AA 1 3 0.7227060 11.53126 10.04499 7.909122 7.127694
## 4 A AA 1 4 0.7975072 11.54219 10.05449 7.912423 7.144407
## 5 A AA 1 5 0.7227060 11.52968 10.02260 7.871311 7.103322
## 6 A AA 2 1 0.5596158 11.52663 10.04107 7.898411 7.118826
## Ca
## 1 11.05311
## 2 11.04930
## 3 11.04580
## 4 11.06147
## 5 11.01023
## 6 11.03747
```
```
cols <- csafethemes:::csafe_cols_secondary[c(3, 12)]
glass_log %>% filter(element %in% c("Li", "Na", "Mg", "Al", "K", "Ca")) %>%
ggplot() + geom_density(aes(x = log_ppm, fill = mfr), alpha = 0.7) + scale_fill_manual(name = "Manufacturer",
values = cols) + facet_wrap(~element, scales = "free", nrow = 2) + labs(x = "Log concentration (ppm)",
y = "Density") + theme(legend.position = "top")
```
Figure 7\.2: Density estimation of selected chemical compositions, colored by manufacturers
Figure [7\.2](glass.html#fig:density) shows density plots of six elements: Al, Ca, K, Li, Mg, and Na. For Na and Ca (major elements), the density curves from the two manufacturers overlap, while for Al and K the curves are clearly separated by manufacturer. This implies that glass fragments from different manufacturers will be very easy to distinguish.
```
glass_log %>% filter(element %in% c("Na", "Ti", "Zr", "Hf")) %>% ggplot() +
geom_boxplot(aes(x = pane, y = log_ppm, fill = mfr), alpha = 0.8, outlier.size = 0.5,
size = 0.1) + scale_fill_manual(name = "Manufacturer", values = cols) +
facet_wrap(~element, scales = "free", nrow = 2, labeller = label_both) +
theme_bw() + theme(legend.position = "none") + scale_x_discrete(labels = c("AA",
rep("", 30), "BA", rep("", 15), "BR")) + labs(x = "Pane (in order of manufacture)",
y = "Log concentration (ppm)")
```
Figure 7\.3: Box plot of four elements in 48 panes, ordered by date of production, from two manufacturers.
Figure 7\.3 shows box plots of the measurements of four elements (Na, Ti, Zr, Hf) in each of the 48 panes in the database, colored by manufacturer. Boxes are ordered by date of production within manufacturer. We can see both between\-pane and within\-pane variability. Interestingly, the values of Zr and Hf in manufacturer A both decrease over time, which is evidence that the element measurements are highly correlated. To account for this relationship, we use methods that do not require independence of measurements.
```
# glass_log (created above) holds the log-transformed measurements; drop the
# raw ppm column before spreading the elements into columns
col_data_AA <- glass_log %>% select(-ppm) %>% filter(pane == "AA") %>% spread(element,
log_ppm) %>% group_by(fragment) %>% summarise_if(is.numeric, mean, na.rm = TRUE)
col_data_BA <- glass_log %>% select(-ppm) %>% filter(pane == "BA") %>% spread(element,
log_ppm) %>% group_by(fragment) %>% summarise_if(is.numeric, mean, na.rm = TRUE)
P1 <- ggcorr(col_data_AA[, 3:20], geom = "blank", label = TRUE, hjust = 0.75) +
geom_point(size = 10, aes(color = coefficient > 0, alpha = abs(coefficient) >
0.5)) + scale_alpha_manual(values = c(`TRUE` = 0.25, `FALSE` = 0)) +
guides(color = FALSE, alpha = FALSE)
P2 <- ggcorr(col_data_BA[, 3:20], geom = "blank", label = TRUE, hjust = 0.75) +
geom_point(size = 10, aes(color = coefficient > 0, alpha = abs(coefficient) >
0.5)) + scale_alpha_manual(values = c(`TRUE` = 0.25, `FALSE` = 0)) +
guides(color = FALSE, alpha = FALSE)
P1 + P2
```
7\.3 R Packages
---------------
We propose a [machine learning](glossary.html#def:ml) method to quantify the similarity between two glass fragments \\(Q\\) and \\(K\\). The goal here is to construct a [classifier](glossary.html#def:classifier) that predicts, with low error, whether or not \\(Q\\) and \\(K\\) have the same source. Using the glass database, we construct many pairwise comparisons between two glass fragments with known sources, and we will record their source as either same source or different source.
To construct the set of comparisons, we take the following steps (a small code sketch of steps 2\-5 appears after the list):
1. Take the natural log of all measurements of all glass fragments. (ppm to \\(\\log\\)(ppm))
2. Select one pane of glass in the database to be the “questioned” source. Sample one fragment from this pane, and call it \\(Q\\).
3. Select one pane of glass in the database to be the “known” source. Sample one fragment from this pane, and call it \\(K\\).
4. Construct the response variable: if the panes in 2 and 3 are the same, the response variable is KM for known mates. Otherwise, the response variable is KNM for known non\-mates.
5. Construct the features: take the mean of the repeated measurements of \\(Q\\) and \\(K\\) in each element, and take the absolute value of the difference between the mean of \\(Q\\) and the mean of \\(K\\).
6. Repeat 2\-5 until we have a data set suitable for training a classifier.
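Below is a minimal sketch of steps 2\-5, assuming the log\-transformed data `glass_log` created earlier in this chapter; `one_comparison()` and its arguments are hypothetical names, not functions from any package.
```
# Hypothetical sketch of steps 2-5: sample one fragment from each of two panes,
# average the repeated measurements per element, and take absolute differences.
# Assumes glass_log with columns pane, fragment, element, log_ppm (among others).
one_comparison <- function(pane_q, pane_k, data = glass_log) {
  pick_fragment <- function(p) {
    data %>% filter(pane == p) %>%
      filter(fragment == sample(unique(fragment), 1)) %>%
      group_by(element) %>%
      summarise(mean_log = mean(log_ppm))
  }
  q <- pick_fragment(pane_q)   # step 2: a fragment from the "questioned" pane
  k <- pick_fragment(pane_k)   # step 3: a fragment from the "known" pane
  q %>% inner_join(k, by = "element", suffix = c("_q", "_k")) %>%
    mutate(diff = abs(mean_log_q - mean_log_k)) %>%       # step 5: features
    select(element, diff) %>% spread(element, diff) %>%
    mutate(class = ifelse(pane_q == pane_k, "KM", "KNM"))  # step 4: response
}
# one_comparison("AA", "AA")  # a KM row (in practice, sample two distinct fragments)
# one_comparison("AA", "BA")  # a KNM row
```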
We use the R package [`caret`](https://topepo.github.io/caret/index.html) to train a random forest classifier to determine whether two glass fragments (\\(Q\\) and \\(K\\)) have the same source or have different sources (Jed Wing et al. [2018](#ref-R-caret)).
To begin, the package can be installed from CRAN:
```
install.packages("caret")
```
```
library(caret)
# other package used for plotting
library(GGally)
library(patchwork)
```
The R package `caret` (**C**lassification **A**nd **RE**gression **T**raining) is used for applied predictive modeling. The `caret` package allows us to run 238 different models as well as adjust data splitting, preprocessing, feature selection, parameter tuning, and variable importance estimation. We use it here to fit a random forest model to our data using cross\-validation and down\-sampling.
```
diff_Q_K_data <- readRDS("dat/rf_data_kfrags_1z.rds")
```
Table 7\.1: Differences of log values of concentrations (Li, Na, Mg, Al, K, Ca) from pairs of known mates (KM) and known non\-mates (KNM)
| Class | Li | Na | Mg | Al | K | Ca |
| --- | --- | --- | --- | --- | --- | --- |
| KM | 0\.0078 | 0\.0043 | 0\.0089 | 0\.0270 | 0\.0103 | 0\.0149 |
| KM | 0\.0845 | 0\.0065 | 0\.0042 | 0\.0049 | 0\.0126 | 0\.0032 |
| KM | 0\.0826 | 0\.0170 | 0\.0069 | 0\.0137 | 0\.0259 | 0\.0014 |
| KNM | 0\.5362 | 0\.0581 | 0\.0267 | 0\.0741 | 0\.0834 | 0\.0619 |
| KNM | 0\.2437 | 0\.0283 | 0\.0092 | 0\.0181 | 0\.0377 | 0\.0386 |
| KNM | 0\.2319 | 0\.0316 | 0\.0283 | 0\.0001 | 0\.0437 | 0\.0518 |
Table [7\.1](glass.html#tab:diffdata2) shows examples of pairwise differences between glass measurements. If we take the difference of two fragments from the same pane, then KM is assigned to the response variable `Class`. If we take the difference of two fragments from two different panes, then KNM is assigned to `Class`. Each row has 18 differences and one variable `Class` indicating whether the two glass fragments share a source. By taking pairwise differences, there are many more KNM pairs than KM pairs: we can construct 67,680 KNM pairs but only 1,440 KM pairs from the glass database.
```
diff_Q_K_data %>% gather(element, diff, Li:Pb) %>% filter(element %in% c("Zr",
"Li", "Hf", "Ca")) %>% ggplot() + geom_density(aes(x = diff, fill = class),
alpha = 0.7) + scale_fill_manual(name = "Class", values = cols) + facet_wrap(~element,
scales = "free", nrow = 2) + labs(x = "Difference between Q and K (log(ppm))") +
theme(legend.position = "top")
```
Figure 7\.4: Histogram of differences from four chemical elements among KM and KNM.
Figure [7\.4](glass.html#fig:diffhist) shows the distribution of differences for KM and KNM pairs for Ca, Hf, Li, and Zr. Across all elements, the distributions of differences for KNM pairs are more dispersed than those for KM pairs. The KM differences have high density near zero, while the KNM differences are shifted to the right and have a long tail. These differences for all 18 elements will be the [features](glossary.html#def:features) of the random forest.
Finally, we construct the random forest classifier. Since the data have 1,440 KM and 67,680 KNM observations, the response variable is imbalanced. With this large imbalance, the algorithm can simply predict KNM and have a low error rate without learning anything about the properties of the KM class. Other ways to handle this imbalance when fitting the random forest for glass source prediction are discussed in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)). In this chapter, we down\-sample the KNM observations to equal the number of KM observations, leaving 1,440 KM and 1,440 KNM comparisons. Then, we sample 70% of them to be the training set and the remaining 30% are the testing set.
```
diff_Q_K_data$class <- as.factor(diff_Q_K_data$class)
# Down sample the majority class to the same number of minority class
set.seed(123789)
down_diff_Q_K_data <- downSample(diff_Q_K_data[, c(1:18)], diff_Q_K_data$class)
down_diff_Q_K_data <- down_diff_Q_K_data %>% mutate(id = row_number())
names(down_diff_Q_K_data)[19] <- "class"
table(down_diff_Q_K_data$class)
```
```
##
## KM KNM
## 1440 1440
```
```
# Create training set
train_data <- down_diff_Q_K_data %>% sample_frac(0.7)
# Create test set
test_data <- anti_join(down_diff_Q_K_data, train_data, by = "id")
train_data <- train_data[, -20] #exclude id
test_data <- test_data[, -20] #exclude id
# dim(train_data) # 2016 19 dim(test_data) # 864 19
```
After down\-sampling and setting aside 30% of the data for testing, we can train the classifier. Below is the R code to fit the random forest using `caret` (Jed Wing et al. [2018](#ref-R-caret)). The tuning parameter for the random forest algorithm is `mtry`, the number of variables available for splitting at each tree node. We try five values (1, 2, 3, 4, 5\) of `mtry` and pick the optimal one. For classification, the default `mtry` is the square root of the number of predictor variables (\\(\\sqrt{18} \\approx 4\.2\\) in our study). To pick the optimal `mtry` value, the area under the [receiver operating characteristic](glossary.html#def:roc) (ROC) curve is calculated; the optimal value is the one with the highest area under the ROC curve (AUC). The data are also automatically centered and scaled for each predictor. We use 10\-fold [cross\-validation](glossary.html#def:crossval) to evaluate the random forest algorithm and repeat the entire process three times.
```
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3, savePredictions = "final",
summaryFunction = twoClassSummary, classProbs = TRUE)
# mtry is recommended to use sqrt(# of variables)=sqrt(18) so 1:5 are tried
# to find the optimal one
RF_classifier <- train(class ~ ., train_data, method = "rf", tuneGrid = expand.grid(.mtry = c(1:5)),
metric = "ROC", preProc = c("center", "scale"), trControl = ctrl)
```
```
RF_classifier <- readRDS("dat/RF_classifier.RDS")
RF_classifier
```
```
## Random Forest
##
## 2016 samples
## 18 predictor
## 2 classes: 'KM', 'KNM'
##
## Pre-processing: centered (18), scaled (18)
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 1815, 1814, 1814, 1814, 1815, 1814, ...
## Resampling results across tuning parameters:
##
## mtry ROC Sens Spec
## 1 0.9637918 0.9826797 0.8323266
## 2 0.9641196 0.9807190 0.8467239
## 3 0.9625203 0.9758170 0.8457239
## 4 0.9624018 0.9738562 0.8467172
## 5 0.9614865 0.9728758 0.8473838
##
## ROC was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 2.
```
The fitted model reports the AUC (ROC), Sens ([Sensitivity](glossary.html#def:sensitivity)) and Spec ([Specificity](glossary.html#def:specificity)) values for each `mtry` value. The `caret` package selects the `mtry` value (2, in our study) that gives the highest AUC, 0\.964, estimated by 10\-fold cross\-validation repeated three times.
The final random forest uses the optimal `mtry` of 2 and 500 trees. On the training set, the class error rate is 0\.02 for KM pairs (false negatives) and 0\.153 for KNM pairs (false positives). The OOB (out\-of\-bag) estimate of the error rate is 0\.085, the error rate computed by predicting each observation using only the trees whose bootstrap samples did not include it.
```
RF_classifier$finalModel$confusion
```
```
## KM KNM class.error
## KM 1000 20 0.01960784
## KNM 152 844 0.15261044
```
```
imp <- varImp(RF_classifier)$importance
imp <- as.data.frame(imp)
imp$varnames <- rownames(imp) # row names to column
rownames(imp) <- NULL
imp <- imp %>% arrange(-Overall)
imp$varnames <- as.character(imp$varnames)
elements <- read_csv("dat/elements.csv")
imp %>% left_join(elements[, c(2, 4)], by = c(varnames = "symb")) %>% ggplot(aes(x = reorder(varnames,
Overall), y = Overall, fill = classification)) + geom_bar(stat = "identity",
color = "grey40") + scale_fill_brewer(name = "Element", palette = "Blues",
direction = -1) + labs(y = "Overall importance", x = "Variable") + scale_y_continuous(position = "right") +
coord_flip() + theme_bw()
```
Figure 7\.5: Variable importance from the RF classifier, colored by element types (major, minor or trace).
By fitting the random forest algorithm, we can also get a measure of variable importance. This metric ranks which of the 18 elements are most important for correctly predicting the source of the glass fragments. Figure [7\.5](glass.html#fig:varimp) shows that K, Ce, Zr, Rb, and Hf are the five most important variables and Pb, Sr, Na, Ca, and Mg are the five least important variables. None of the major elements is important: this is not surprising because all glass is, broadly speaking, very similar chemically. Conversely, most of the trace elements are ranked in the top half.
7\.4 Drawing Conclusions
------------------------
The result of the random forest classifier is a probability of the observation belonging to the two classes, KM and KNM, and a predicted class. For an observation, if the class probability for KM is greater than 0\.5, then the class prediction is M (mated), otherwise it is NM (non\-mated). We use the random forest classifier to predict on the test set. Recall that we know the ground truth, KM and KNM for the test data.
```
# Get prediction from RF classifier in test data
pred_class <- predict(RF_classifier, test_data)
table_pred_class <- confusionMatrix(pred_class, test_data$class)$table
table2 <- data.frame(table_pred_class) %>% spread(Reference, Freq)
table2$Prediction <- c("M", "NM")
table2 %>% knitr::kable(format = "html", caption = "Classification result of test set",
longtable = FALSE, row.names = FALSE)
```
Table 7\.2: Classification result of test set
| Prediction | KM | KNM |
| --- | --- | --- |
| M | 409 | 69 |
| NM | 11 | 375 |
Table [7\.2](glass.html#tab:testtable) shows the classification results on the testing set from the random forest. There are 420 KM and 444 KNM comparisons in the test set. For the KM cases, the RF classifier correctly predicts the source as M in 97\.4% of cases and wrongly predicts the source as NM in 2\.6% of cases; this is the false negative rate (FNR). For the KNM cases, the RF correctly classifies them as NM 84\.5% of the time, while 15\.5% are incorrectly classified as M. The method has a relatively high FPR (15\.5%) because the database contains many panes produced close in time by the same manufacturer, which are difficult to distinguish from one another.
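These rates follow directly from the counts in Table [7\.2](glass.html#tab:testtable):
```
# rates implied by the confusion matrix in Table 7.2 (column totals 420 and 444)
c(KM_correct  = 409 / (409 + 11),   # 0.974: KM pairs predicted M
  KM_missed   = 11 / (409 + 11),    # 0.026: false negative rate
  KNM_correct = 375 / (69 + 375),   # 0.845: KNM pairs predicted NM
  KNM_wrong   = 69 / (69 + 375))    # 0.155: false positive rate
```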
```
# class probability for the same pane is used as similarity score for RF
prob_prediction <- predict(RF_classifier, test_data, type = "prob")[, "KM"]
test_data$prob <- prob_prediction
test_data %>% ggplot() + geom_density(aes(x = prob, fill = class), alpha = 0.7) +
scale_fill_manual(name = "Class", values = cols) + labs(x = "Empirical probability for KM from RF classifier",
y = "Density") + theme(legend.position = "top")
```
Figure 7\.6: Scores from the RF classifier for the test set. Ground truth (either KM or KNM) is known. Any observations above 0\.5 are declared same source, while all others are declared different source. Notice the long tail in the KNM cases.
Figure [7\.6](glass.html#fig:testscore) shows the distribution of the RF scores, colored by true class, in the test data. The score is the empirical class probability that an observation belongs to the KM class. The modes of the two classes are well separated, while there is still some overlap between the classes’ density curves. The tail of the distribution of scores for KNMs is heavier than that of the KMs, which helps explain the higher false positive rate. Table [7\.2](glass.html#tab:testtable) is the predicted classification result using a cut\-off of 0\.5: if the RF score is larger than 0\.5, we predict that the pair of glass fragments has the same source (M); if not, we declare that they have different sources (NM).
7\.5 Case Study
---------------
Here we introduce a new set of five comparisons and use the random forest classifier we trained in Section [7\.3](glass.html#glass_rpkgs) to determine whether the two samples being compared are from the same source of glass or from different sources.
Suppose you are given data on eight glass samples, where each sample has been measured 5 times.
Your supervisor wants you to make five comparisons:
| compare\_id | sample\_id\_1 | sample\_id\_2 |
| --- | --- | --- |
| 1 | 1 | 2 |
| 2 | 1 | 3 |
| 3 | 1 | 5 |
| 4 | 4 | 6 |
| 5 | 7 | 8 |
First, you need to take the log of the measurements, then get the mean for each sample and each element.
```
# take the log, then the mean of each for comparison
new_samples <- mutate(new_samples, logppm = log(ppm)) %>%
select(-ppm) %>%
group_by(sample_id, element) %>%
summarise(mean_logppm = mean(logppm))
```
Next, make each of the five comparisons. We write a function, `make_compare()` to do so, then we use [`purrr::map2()`](https://purrr.tidyverse.org/reference/map2.html) to perform the comparison for each row in `compare_id`. The resulting compared data is below.
```
make_compare <- function(id1, id2) {
dat1 <- new_samples %>% filter(sample_id == id1) %>% ungroup() %>% select(element,
mean_logppm)
dat2 <- new_samples %>% filter(sample_id == id2) %>% ungroup() %>%
select(element, mean_logppm)
tibble(element = dat1$element, diff = abs(dat1$mean_logppm - dat2$mean_logppm)) %>%
spread(element, diff)
}
compared <- comparisons %>% mutate(compare = map2(sample_id_1, sample_id_2,
make_compare)) %>% unnest()
DT::datatable(compared) %>% DT::formatRound(4:21, 3)
```
Finally, we take the compared data and predict using the random forest object:
```
newpred <- predict(RF_classifier, newdata = compared, type = "prob")
```
```
# only take the id data from compare, and the M prediction from newpred
bind_cols(compared[, 1:3], score = newpred[, 1]) %>% mutate(prediction = ifelse(score >
0.5, "M", "NM")) %>% knitr::kable(format = "html", caption = "Predicted scores of new comparison cases from the random forest classifier",
longtable = FALSE, row.names = FALSE, digits = 3, col.names = c("Comparison ID",
"Sample ID 1", "Sample ID 2", "Score", "Decision"))
```
Table 7\.3: Predicted scores of new comparison cases from the random forest classifier
| Comparison ID | Sample ID 1 | Sample ID 2 | Score | Decision |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2 | 0\.812 | M |
| 2 | 1 | 3 | 0\.238 | NM |
| 3 | 1 | 5 | 0\.002 | NM |
| 4 | 4 | 6 | 0\.430 | NM |
| 5 | 7 | 8 | 0\.618 | M |
Table [7\.3](glass.html#tab:newtesttable) shows the RF score, defined as the empirical class probability from the RF algorithm that the samples are from the same pane (M)[6](#fn6). Based on these scores, it appears that samples 1 and 2 are very likely from the same source, while samples 1 and 5 are very likely from different sources. Using a threshold of 0\.5, sample 4 and sample 6 are predicted to be from different sources. However, this score is so close to the threshold of 0\.5 that we may want to say this comparison is inconclusive.[7](#fn7) Samples 1 and 3 are probably from different sources, though we are less confident in that determination than we are in the decision that samples 1 and 5 are from different sources. Finally, samples 7 and 8 are probably from the same source, but we are less certain of this than we are that samples 1 and 2 are from the same source.
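One way to formalize the “inconclusive” idea is a three\-way decision rule with a band around 0\.5\. The band (0\.4, 0\.6\) used below is purely hypothetical and would need to be calibrated on comparisons with known ground truth.
```
# Hypothetical three-way rule: M above the band, NM below it, inconclusive inside
decide <- function(score, lower = 0.4, upper = 0.6) {
  ifelse(score > upper, "M", ifelse(score < lower, "NM", "Inconclusive"))
}
decide(c(0.812, 0.238, 0.002, 0.430, 0.618))
# returns "M" "NM" "NM" "Inconclusive" "M" for the scores in Table 7.3
```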
More research into the appropriateness of this random forest method for making glass source conclusions is needed. Important points to consider are:
* What database should be used to train the algorithm?
* Could the random forest method overfit, reducing the generalizability of this method?
Nevertheless, the RF classifier makes good use of the glass fragment data, with its high dimension and small number of repeated measurements. More details on the RF classifier and additional discussion can be found in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)).
[
#### *Soyoung Park, Sam Tyner*
7\.1 Introduction
-----------------
It is easy to imagine a crime scene with glass fragments: a burglar may have broken a glass door, a glass bottle could have been used in an assault, or a domestic disturbance may involve throwing something through a window. During the commission of a crime, there are many ways that glass can break and be transferred from the scene. The study of glass fragments is important to forensic science because the glass broken at the scene can transfer to the perpetrator’s shoes, clothing, or even their hair (Curran, Hicks, and Buckleton [2000](#ref-curranbook)).
Crime scene investigators collect fragments of glass at the scene as a part of the evidence collection process, and the fragments are sent to the forensic science lab for processing. Similarly, evidence such as clothing and shoes are collected from a suspect, and if glass is found, the fragments are sent to the lab and compared to the fragments found at the scene. The question that the analyst usually tries to answer is, “Did these glass fragments come from the same source?” This is a *source level* question, meaning that the comparison of the fragments will only tell the investigators whether or not the fragments from the suspect and the fragments from the scene have the same origin. As discussed in Section [1\.3](intro.html#forscip), the forensic analysis will not inform investigators *how* the suspect came into contact (*activity level*) with the glass or if the suspect was the perpetrator of the crime (*offense level*) (Roger Cook et al. [1998](#ref-hop)).
### 7\.1\.1 Problems of interest
There are two key problems of interest in glass fragments comparison, but before defining them, we need to define the different glass involved in the investigation of a crime. Glass fragments found on the suspect, for example in their hair, shoes, or clothes, are [*questioned*](glossary.html#def:questioned) fragments, which we denote by \\(Q\\). Glass fragments found at the crime scene, for example in front of a broken window or taken from the broken window, are [*known*](glossary.html#def:known) fragments, which we denote \\(K\\). This brings us to a [specific source](glossary.html#def:specsource) question: Did the questioned fragments \\(Q\\) found on the suspect come from the same source of glass as the known fragments \\(K\\), which we know belong to a specific piece of glass at the scene? The goal is now to quantify the similarity between \\(Q\\) and \\(K\\). There are lots of ways measure similarity between two glass fragments, but the metric should be defined according to available databases of glass fragment measurements for which ground truth is known. For example, if we have elemental compositions measured in parts per million (ppm) as numerical values, the similarity can be quantified by the difference of the chemical compositions of \\(Q\\) and \\(K\\).
### 7\.1\.2 Current practice
There are many types of glass measurements such as color, thickness, [refractive index](glossary.html#def:ri) (RI) and chemical concentrations. In this chapter, we will focus on [float glass](glossary.html#def:floatglass) that is most frequently used in windows, doors and automobiles. For discussions of the other measurements see e.g. Curran, Hicks, and Buckleton ([2000](#ref-curranbook)). The elemental concentrations of float glass that we use here were obtained through inductively coupled mass spectrometry with a laser add\-on (LA\-ICP\-MS). In the current practice, there are two analysis guides from [ASTM International](#def:ASTM), (ASTM\-E2330\-12 [2012](#ref-ASTME233012)) and (ASTM\-E2927\-16 [2016](#ref-ASTME292716)). To determine the source of glass fragments according to these two guides, intervals around the mean concentrations are computed for each element, and if all elements’ intervals overlap, then the conclusion is that the fragments come from the same source. For more detail on these methods, see ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)).
### 7\.1\.3 Comparing glass fragments
In order to determine if two glass fragments come from the same source, a forensic analyst considers many properties of the glass, including color, [fluorescence](glossary.html#def:fluor), thickness, surface features, curvature, and chemical composition. All methods for examining these properties, except for methods of chemical composition analysis, are non\-destructive. If the fragments are large, exclusion are easy to reach if the glass are of different colors because of the wide variety of glass colors possible in manufacturing. Typically, however, glass fragments are quite small and color determination is very difficult. Similarly, thickness of glass is dictated by the manufacturing process, which aims for uniform thickness, so if two glass fragments differ in thickness by more than 0\.25mm, an exclusion is made (Bottrell [2009](#ref-glassbackground)). For glass fragments of the same color and thickness, microscopic techniques for determining light absorption (fluorescence), curvature, surface features (such as coatings), are used before the destructive chemical composition analysis.
### 7\.1\.4 Goal of this chapter
In this chapter, we construct a new rule for making glass source conclusions using the [random forest](glossary.html#def:rfdef) algorithm to classify the source of glass fragments (Park and Carriquiry [2019](#ref-park2018)[a](#ref-park2018)).
### 7\.1\.1 Problems of interest
There are two key problems of interest in glass fragment comparison, but before defining them, we need to define the different types of glass fragments involved in the investigation of a crime. Glass fragments found on the suspect, for example in their hair, shoes, or clothes, are [*questioned*](glossary.html#def:questioned) fragments, which we denote by \\(Q\\). Glass fragments found at the crime scene, for example in front of a broken window or taken from the broken window, are [*known*](glossary.html#def:known) fragments, which we denote \\(K\\). This brings us to a [specific source](glossary.html#def:specsource) question: Did the questioned fragments \\(Q\\) found on the suspect come from the same source of glass as the known fragments \\(K\\), which we know belong to a specific piece of glass at the scene? The goal is now to quantify the similarity between \\(Q\\) and \\(K\\). There are many ways to measure similarity between two glass fragments, but the metric should be defined according to available databases of glass fragment measurements for which ground truth is known. For example, if we have elemental compositions measured in parts per million (ppm) as numerical values, the similarity can be quantified by the difference between the chemical compositions of \\(Q\\) and \\(K\\).
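As a toy illustration of this idea, one very simple similarity metric is the element\-wise absolute difference of mean concentrations. The object names and values below are made up for illustration only; they are not part of the database used later.
```
# Minimal sketch (illustrative values): similarity between Q and K measured
# by the absolute difference of their mean elemental concentrations.
Q_mean <- c(Al = 2680, Ba = 10.8, Ca = 63100)  # hypothetical mean ppm for Q
K_mean <- c(Al = 2655, Ba = 11.1, Ca = 63400)  # hypothetical mean ppm for K
abs(Q_mean - K_mean)                           # one difference per element
```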
### 7\.1\.2 Current practice
There are many types of glass measurements, such as color, thickness, [refractive index](glossary.html#def:ri) (RI), and chemical concentrations. In this chapter, we will focus on [float glass](glossary.html#def:floatglass), which is the type most frequently used in windows, doors, and automobiles. For discussions of the other measurements see e.g. Curran, Hicks, and Buckleton ([2000](#ref-curranbook)). The elemental concentrations of float glass that we use here were obtained through laser ablation inductively coupled plasma mass spectrometry (LA\-ICP\-MS). In current practice, there are two analysis guides from [ASTM International](#def:ASTM): ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)). To determine the source of glass fragments according to these two guides, intervals around the mean concentrations are computed for each element, and if all elements’ intervals overlap, then the conclusion is that the fragments come from the same source. For more detail on these methods, see ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)).
### 7\.1\.3 Comparing glass fragments
In order to determine if two glass fragments come from the same source, a forensic analyst considers many properties of the glass, including color, [fluorescence](glossary.html#def:fluor), thickness, surface features, curvature, and chemical composition. All methods for examining these properties, except chemical composition analysis, are non\-destructive. If the fragments are large, an exclusion is easy to reach when the fragments are of different colors, because of the wide variety of glass colors possible in manufacturing. Typically, however, glass fragments are quite small and color determination is very difficult. Similarly, the thickness of glass is dictated by the manufacturing process, which aims for uniform thickness, so if two glass fragments differ in thickness by more than 0\.25mm, an exclusion is made (Bottrell [2009](#ref-glassbackground)). For glass fragments of the same color and thickness, microscopic techniques for examining light absorption (fluorescence), curvature, and surface features (such as coatings) are used before the destructive chemical composition analysis.
### 7\.1\.4 Goal of this chapter
In this chapter, we construct a new rule for making glass source conclusions using the [random forest](glossary.html#def:rfdef) algorithm to classify the source of glass fragments (Park and Carriquiry [2019](#ref-park2018)[a](#ref-park2018)).
7\.2 Data
---------
### 7\.2\.1 Chemical composition of glass
The process for determining the chemical composition of a glass fragment is given in great detail in ASTM\-E2330\-12 ([2012](#ref-ASTME233012)) and ASTM\-E2927\-16 ([2016](#ref-ASTME292716)). This destructive method determines elemental composition with Inductively Coupled Plasma Mass Spectrometry (ICP\-MS). Up to 40 elements can be detected in a glass fragment using this method. In Weis et al. ([2011](#ref-weisglass)), only 18 elements are used: calcium (Ca), sodium (Na) and magnesium (Mg) are the major elements, followed by aluminum (Al), potassium (K) and iron (Fe) as minor elements, and lithium (Li), titanium (Ti), manganese (Mn), rubidium (Rb), strontium (Sr), zirconium (Zr), barium (Ba), lanthanum (La), cerium (Ce), neodymium (Nd), hafnium (Hf), and lead (Pb) as the trace elements. The methods of Weis et al. ([2011](#ref-weisglass)) use standard deviations (\\(\\sigma\\)) of repeated measurements of the same fragment to create intervals around the measurements. Intervals of width \\(2\\sigma, 4\\sigma, 6\\sigma, 8\\sigma, 10\\sigma, 12\\sigma, 16\\sigma, 20\\sigma, 30\\sigma,\\) and \\(40\\sigma\\) are considered for overlap.
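The sketch below illustrates this interval\-overlap logic. The replicate matrices `Q_reps` and `K_reps` and the single width multiplier `k` are hypothetical simplifications of the ASTM and Weis et al. procedures, not a faithful implementation of either.
```
# Sketch: mean +/- k*sigma intervals per element from replicate measurements,
# with "same source" supported only if the Q and K intervals overlap for
# every element.
overlap_rule <- function(Q_reps, K_reps, k = 4) {
  # Q_reps, K_reps: matrices with replicates in rows and elements in columns
  q_lo <- colMeans(Q_reps) - k * apply(Q_reps, 2, sd)
  q_hi <- colMeans(Q_reps) + k * apply(Q_reps, 2, sd)
  k_lo <- colMeans(K_reps) - k * apply(K_reps, 2, sd)
  k_hi <- colMeans(K_reps) + k * apply(K_reps, 2, sd)
  all(q_lo <= k_hi & k_lo <= q_hi)   # TRUE if all element intervals overlap
}
```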
### 7\.2\.2 Data source
Dr. Alicia Carriquiry of Iowa State University commissioned the collection of a large database of chemical compositions of float glass samples. The details of this database are explained in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)). The full database is available [here](https://github.com/CSAFE-ISU/AOAS-2018-glass-manuscript/tree/master/glassdata). The database includes 31 panes of float glass manufactured by Company A and 17 panes manufactured by Company B, both located in the United States. The Company A panes are labeled AA, AB, … , AAR, and the Company B panes are labeled BA, BB, … , BR. The panes from Company A were produced during a three\-week period (January 3\-24, 2017\) and the panes from Company B were produced during a two\-week period (December 5\-16, 2016\).
To understand variability within a ribbon of glass, two glass panes were collected on almost all days in each company, one from the left side and one from the right side of the ribbon. Twenty\-four fragments were randomly sampled from each glass pane. Five replicate measurements were obtained for 21 of the 24 fragments in each pane; for the remaining three fragments in each pane, we obtained 20 replicate measurements. Therefore, each pane has 165 measurements for 18 elements. For an illustration of this sampling scheme, see Figure [7\.1](glass.html#fig:fragsample). In some panes, there may be a fragment with fewer than five replicate measurements. The unit for all measurements is parts per million (ppm).
```
library(tidyverse)
pane <- expand.grid(x = 1:16, y = 1:4)
n <- nrow(pane)
pane$id <- 1:n
frags <- sample(n, 24)
rep20 <- sample(frags, 3)
rep5 <- frags[!(frags %in% rep20)]
pane_sample <- data.frame(frag = c(rep(rep5, each = 5), rep(rep20, each = 20)),
rep = c(rep(1:5, 21), rep(1:20, 3)))
sample_data <- left_join(pane, pane_sample, by = c(id = "frag"))
ggplot(data = sample_data, aes(x = x, y = y)) + geom_tile(fill = "white", color = "black") +
geom_jitter(aes(color = as.factor(rep)), alpha = 0.8, size = 0.5) + scale_color_manual(values = c(rep("black",
20))) + theme_void() + theme(legend.position = "none")
```
Figure 7\.1: An example of how the glass fragments were sampled, if the 64 squares are imagined to be randomly broken fragments within a pane.
### 7\.2\.3 Data structure
Next, we look at the glass data.
```
glass <- read.csv("dat/glass_raw_all.csv")
head(glass)
```
```
## pane fragment Rep mfr element ppm
## 1 AA 1 1 A Al 2678.000
## 2 AA 1 1 A Ba 10.800
## 3 AA 1 1 A Ca 63140.000
## 4 AA 1 1 A Ce 9.520
## 5 AA 1 1 A Fe 667.000
## 6 AA 1 1 A Hf 1.148
```
The elements have very different scales, as some (e.g. Ca) are major elements, some (e.g. Al) are minor elements, and others (e.g. Hf) are trace elements. Thus, we take the natural log transformation of all measurements to put them on a similar scale.
```
# need to make sure the panes are shown in order of mfr date
pane_order <- c(paste0("A", LETTERS[c(1:13, 15, 22:25)]), paste0("AA", LETTERS[c(1:4,
6, 8:13, 17:18)]), names(table(glass$pane))[c(32:48)])
glass$pane <- ordered(glass$pane, levels = pane_order)
glass_log <- glass %>% mutate(log_ppm = log(ppm))
glass_log %>% select(-ppm) %>% spread(element, log_ppm) %>% select(mfr, pane,
fragment, Rep, Li, Na, Mg, Al, K, Ca) %>% head()
```
```
## mfr pane fragment Rep Li Na Mg Al K
## 1 A AA 1 1 0.8329091 11.54025 10.04715 7.892826 7.136483
## 2 A AA 1 2 0.5423243 11.53772 10.03671 7.883069 7.126248
## 3 A AA 1 3 0.7227060 11.53126 10.04499 7.909122 7.127694
## 4 A AA 1 4 0.7975072 11.54219 10.05449 7.912423 7.144407
## 5 A AA 1 5 0.7227060 11.52968 10.02260 7.871311 7.103322
## 6 A AA 2 1 0.5596158 11.52663 10.04107 7.898411 7.118826
## Ca
## 1 11.05311
## 2 11.04930
## 3 11.04580
## 4 11.06147
## 5 11.01023
## 6 11.03747
```
```
cols <- csafethemes:::csafe_cols_secondary[c(3, 12)]
glass_log %>% filter(element %in% c("Li", "Na", "Mg", "Al", "K", "Ca")) %>%
ggplot() + geom_density(aes(x = log_ppm, fill = mfr), alpha = 0.7) + scale_fill_manual(name = "Manufacturer",
values = cols) + facet_wrap(~element, scales = "free", nrow = 2) + labs(x = "Log concentration (ppm)",
y = "Density") + theme(legend.position = "top")
```
Figure 7\.2: Density estimation of selected chemical compositions, colored by manufacturers
Figure [7\.2](glass.html#fig:density) shows density plots of the log concentrations of six elements: Al, Ca, K, Li, Mg, and Na. For Na and Ca (major elements), the density curves from the two manufacturers largely overlap, while Al and K show clear separation between manufacturers. This implies that glass fragments from different manufacturers will be very easy to distinguish.
```
glass_log %>% filter(element %in% c("Na", "Ti", "Zr", "Hf")) %>% ggplot() +
geom_boxplot(aes(x = pane, y = log_ppm, fill = mfr), alpha = 0.8, outlier.size = 0.5,
size = 0.1) + scale_fill_manual(name = "Manufacturer", values = cols) +
facet_wrap(~element, scales = "free", nrow = 2, labeller = label_both) +
theme_bw() + theme(legend.position = "none") + scale_x_discrete(labels = c("AA",
rep("", 30), "BA", rep("", 15), "BR")) + labs(x = "Pane (in order of manufacture)",
y = "Log concentration (ppm)")
```
Figure 7\.3: Box plot of four elements in 48 panes, ordered by date of production, from two manufacturers.
Figure 7\.3 shows box plots of the measurements of four elements (Na, Ti, Zr, Hf) in each of the 48 panes in the database, colored by manufacturer. Boxes are ordered by date of production within manufacturer. The plots show both between\-pane and within\-pane variability. Interestingly, the concentrations of Zr and Hf in manufacturer A both decrease over time, which is evidence that the element measurements are highly correlated. To account for this relationship, we use methods that do not require independence of measurements.
```
library(GGally)     # needed for ggcorr()
library(patchwork)  # needed to combine the two plots with `+`
# mean log concentration per fragment for panes AA and BA; column 6 (raw ppm)
# is dropped before spreading on log_ppm
col_data_AA <- glass_log[, -6] %>% filter(pane == "AA") %>% spread(element,
    log_ppm) %>% group_by(fragment) %>% summarise_if(is.numeric, mean, na.rm = TRUE)
col_data_BA <- glass_log[, -6] %>% filter(pane == "BA") %>% spread(element,
    log_ppm) %>% group_by(fragment) %>% summarise_if(is.numeric, mean, na.rm = TRUE)
P1 <- ggcorr(col_data_AA[, 3:20], geom = "blank", label = TRUE, hjust = 0.75) +
geom_point(size = 10, aes(color = coefficient > 0, alpha = abs(coefficient) >
0.5)) + scale_alpha_manual(values = c(`TRUE` = 0.25, `FALSE` = 0)) +
guides(color = FALSE, alpha = FALSE)
P2 <- ggcorr(col_data_BA[, 3:20], geom = "blank", label = TRUE, hjust = 0.75) +
geom_point(size = 10, aes(color = coefficient > 0, alpha = abs(coefficient) >
0.5)) + scale_alpha_manual(values = c(`TRUE` = 0.25, `FALSE` = 0)) +
guides(color = FALSE, alpha = FALSE)
P1 + P2
```
7\.3 R Packages
---------------
We propose a [machine learning](glossary.html#def:ml) method to quantify the similarity between two glass fragments \\(Q\\) and \\(K\\). The goal here is to construct a [classifier](glossary.html#def:classifier) that predicts, with low error, whether or not \\(Q\\) and \\(K\\) have the same source. Using the glass database, we construct many pairwise comparisons between glass fragments with known sources, and we record each pair as either same source or different source.
To construct the set of comparisons, we
1. Take the natural log of all measurements of all glass fragments. (ppm to \\(\\log\\)(ppm))
2. Select one pane of glass in the database to be the “questioned” source. Sample one fragment from this pane, and call it \\(Q\\).
3. Select one pane of glass in the database to be the “known” source. Sample one fragment from this pane, and call it \\(K\\).
4. Construct the response variable: if the panes in 2 and 3 are the same, the response variable is KM for known mates. Otherwise, the response variable is KNM for known non\-mates.
5. Construct the features: take the mean of the repeated measurements of \\(Q\\) and \\(K\\) in each element, and take the absolute value of the difference between the mean of \\(Q\\) and the mean of \\(K\\).
6\. Repeat steps 2\-5 until we have a data set suitable for training a classifier (a code sketch of this construction is given below).
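A minimal sketch of steps 2\-5, using the long\-format `glass_log` data frame built above (columns `pane`, `fragment`, `element`, `log_ppm`), is shown below. It is illustrative only, not the exact sampling scheme used to build the training data for this chapter.
```
# Sketch of one Q-vs-K comparison: sample one fragment from each pane, average
# the replicate measurements per element, and take absolute differences.
one_comparison <- function(glass_log, pane_q, pane_k) {
  frag_q <- sample(unique(glass_log$fragment[glass_log$pane == pane_q]), 1)
  frag_k <- sample(unique(glass_log$fragment[glass_log$pane == pane_k]), 1)
  q_means <- with(subset(glass_log, pane == pane_q & fragment == frag_q),
                  tapply(log_ppm, element, mean))
  k_means <- with(subset(glass_log, pane == pane_k & fragment == frag_k),
                  tapply(log_ppm, element, mean))
  # KM if both fragments come from the same pane, KNM otherwise (for KM pairs
  # one would sample two distinct fragments from that pane)
  data.frame(t(abs(q_means - k_means)),
             class = ifelse(pane_q == pane_k, "KM", "KNM"))
}
one_comparison(glass_log, "AA", "BA")   # one KNM comparison row
```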
We use the R package [`caret`](https://topepo.github.io/caret/index.html) to train a random forest classifier to determine whether two glass fragments (\\(Q\\) and \\(K\\)) have the same source or have different sources (Jed Wing et al. [2018](#ref-R-caret)).
To begin, the package can be installed from CRAN:
```
install.packages("caret")
```
```
library(caret)
# other packages used for plotting
library(GGally)
library(patchwork)
```
The R package `caret` (**C**lassification **A**nd **RE**gression **T**raining) is used for applied predictive modeling. The `caret` package provides access to 238 different models, as well as tools for data splitting, preprocessing, feature selection, parameter tuning, and variable importance estimation. We use it here to fit a random forest model to our data using cross\-validation and down\-sampling.
```
diff_Q_K_data <- readRDS("dat/rf_data_kfrags_1z.rds")
```
Table 7\.1: Differences of log values of concentrations (Li, Na, Mg, Al, K, Ca) from pairs of known mates (KM) and known non\-mates (KNM)
| Class | Li | Na | Mg | Al | K | Ca |
| --- | --- | --- | --- | --- | --- | --- |
| KM | 0\.0078 | 0\.0043 | 0\.0089 | 0\.0270 | 0\.0103 | 0\.0149 |
| KM | 0\.0845 | 0\.0065 | 0\.0042 | 0\.0049 | 0\.0126 | 0\.0032 |
| KM | 0\.0826 | 0\.0170 | 0\.0069 | 0\.0137 | 0\.0259 | 0\.0014 |
| KNM | 0\.5362 | 0\.0581 | 0\.0267 | 0\.0741 | 0\.0834 | 0\.0619 |
| KNM | 0\.2437 | 0\.0283 | 0\.0092 | 0\.0181 | 0\.0377 | 0\.0386 |
| KNM | 0\.2319 | 0\.0316 | 0\.0283 | 0\.0001 | 0\.0437 | 0\.0518 |
Table [7\.1](glass.html#tab:diffdata2) shows examples of pairwise differences among glass measurements. If we take the difference of two fragments from the same pane, then KM is assigned to the response variable `Class`. If we take the difference of two fragments from two different panes, then KNM is assigned to `Class`. Each row has 18 differences and one variable `Class` indicating whether the two glass fragments share a source. Because of the pairwise construction, there are many more KNM pairs than KM pairs: we can construct 67,680 KNM pairs but only 1,440 KM pairs from the glass database.
```
diff_Q_K_data %>% gather(element, diff, Li:Pb) %>% filter(element %in% c("Zr",
"Li", "Hf", "Ca")) %>% ggplot() + geom_density(aes(x = diff, fill = class),
alpha = 0.7) + scale_fill_manual(name = "Class", values = cols) + facet_wrap(~element,
scales = "free", nrow = 2) + labs(x = "Difference between Q and K (log(ppm))") +
theme(legend.position = "top")
```
Figure 7\.4: Distributions of differences for four chemical elements among KM and KNM pairs.
Figure [7\.4](glass.html#fig:diffhist) shows the distribution of differences for KM and KNM pairs for Ca, Hf, Li, and Zr. Across all elements, the distributions of differences for KNM pairs are more dispersed than those for KM pairs. The KM differences have high density near zero, while the KNM differences are shifted to the right and have a long tail. These differences for all 18 elements will be the [features](glossary.html#def:features) of the random forest.
Finally, we construct the random forest classifier. Since the data have 1,440 KM and 67,680 KNM observations, the response variable is imbalanced. With this large imbalance, the algorithm could simply predict KNM for every pair and achieve a low error rate without learning anything about the KM class. Other ways to handle this imbalance when fitting the random forest for glass fragment source prediction are discussed in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)). In this chapter, we down\-sample the KNM observations to equal the number of KM observations, giving 1,440 KM and 1,440 KNM comparisons. Then, we sample 70% of them to be the training set and the remaining 30% to be the testing set.
```
diff_Q_K_data$class <- as.factor(diff_Q_K_data$class)
# Down sample the majority class to the same number of minority class
set.seed(123789)
down_diff_Q_K_data <- downSample(diff_Q_K_data[, c(1:18)], diff_Q_K_data$class)
down_diff_Q_K_data <- down_diff_Q_K_data %>% mutate(id = row_number())
names(down_diff_Q_K_data)[19] <- "class"
table(down_diff_Q_K_data$class)
```
```
##
## KM KNM
## 1440 1440
```
```
# Create training set
train_data <- down_diff_Q_K_data %>% sample_frac(0.7)
# Create test set
test_data <- anti_join(down_diff_Q_K_data, train_data, by = "id")
train_data <- train_data[, -20] #exclude id
test_data <- test_data[, -20] #exclude id
# dim(train_data) # 2016 19 dim(test_data) # 864 19
```
After down\-sampling and setting aside 30% of the data for testing, we can train the classifier. Below is the R code to fit the random forest, using `caret` (Jed Wing et al. [2018](#ref-R-caret)). The tuning parameter for the random forest algorithm is `mtry`, the number of variables available for splitting at each tree node. We try five values (1, 2, 3, 4, 5\) to tune `mtry` and pick the optimal one. For classification, the default `mtry` is the square root of the number of predictor variables (\\(\\sqrt{18} \\approx 4\.2\\) in our study). To pick the optimal `mtry`, the area under the [receiver operating characteristic](glossary.html#def:roc) (ROC) curve is calculated; the optimal value is the one that gives the highest area under the ROC curve (AUC). The data are also automatically centered and scaled for each predictor. We use 10\-fold [cross\-validation](glossary.html#def:crossval) to evaluate the random forest algorithm and repeat the entire process three times.
```
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3, savePredictions = "final",
summaryFunction = twoClassSummary, classProbs = TRUE)
# mtry is recommended to use sqrt(# of variables)=sqrt(18) so 1:5 are tried
# to find the optimal one
RF_classifier <- train(class ~ ., train_data, method = "rf", tuneGrid = expand.grid(.mtry = c(1:5)),
metric = "ROC", preProc = c("center", "scale"), trControl = ctrl)
```
```
RF_classifier <- readRDS("dat/RF_classifier.RDS")
RF_classifier
```
```
## Random Forest
##
## 2016 samples
## 18 predictor
## 2 classes: 'KM', 'KNM'
##
## Pre-processing: centered (18), scaled (18)
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 1815, 1814, 1814, 1814, 1815, 1814, ...
## Resampling results across tuning parameters:
##
## mtry ROC Sens Spec
## 1 0.9637918 0.9826797 0.8323266
## 2 0.9641196 0.9807190 0.8467239
## 3 0.9625203 0.9758170 0.8457239
## 4 0.9624018 0.9738562 0.8467172
## 5 0.9614865 0.9728758 0.8473838
##
## ROC was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 2.
```
The fitted RF object reports the AUC (ROC), Sens ([Sensitivity](glossary.html#def:sensitivity)) and Spec ([Specificity](glossary.html#def:specificity)) values for each value of `mtry`. The `caret` package selects the `mtry` value that gives the highest AUC, which here is `mtry = 2` with an AUC of 0\.964, based on 10\-fold cross\-validation repeated three times.
The final random forest uses the optimal `mtry` of 2 and 500 trees. On the training set, the false negative rate is 0\.02 and the false positive rate is 0\.153\. The out\-of\-bag (OOB) estimate of the error rate is 0\.085; this is computed from the observations left out of the bootstrap sample used to grow each tree.
```
RF_classifier$finalModel$confusion
```
```
## KM KNM class.error
## KM 1000 20 0.01960784
## KNM 152 844 0.15261044
```
```
imp <- varImp(RF_classifier)$importance
imp <- as.data.frame(imp)
imp$varnames <- rownames(imp) # row names to column
rownames(imp) <- NULL
imp <- imp %>% arrange(-Overall)
imp$varnames <- as.character(imp$varnames)
elements <- read_csv("dat/elements.csv")
imp %>% left_join(elements[, c(2, 4)], by = c(varnames = "symb")) %>% ggplot(aes(x = reorder(varnames,
Overall), y = Overall, fill = classification)) + geom_bar(stat = "identity",
color = "grey40") + scale_fill_brewer(name = "Element", palette = "Blues",
direction = -1) + labs(y = "Overall importance", x = "Variable") + scale_y_continuous(position = "right") +
coord_flip() + theme_bw()
```
Figure 7\.5: Variable importance from the RF classifier, colored by element types (major, minor or trace).
By fitting the random forest algorithm, we can also get a measure of variable importance. This metric ranks which of the 18 elements are most important for correctly predicting the source of the glass fragments. Figure [7\.5](glass.html#fig:varimp) shows that K, Ce, Zr, Rb, and Hf are the five most important variables and Pb, Sr, Na, Ca, and Mg are the five least important variables. None of the major elements is important: this is not surprising because all glass is, broadly speaking, chemically very similar. Conversely, most of the trace elements are ranked in the top half.
7\.4 Drawing Conclusions
------------------------
The random forest classifier returns, for each observation, a probability of belonging to each of the two classes, KM and KNM, along with a predicted class. For an observation, if the class probability for KM is greater than 0\.5, then the class prediction is M (mated); otherwise it is NM (non\-mated). We use the random forest classifier to predict on the test set. Recall that we know the ground truth, KM or KNM, for the test data.
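In code, this decision rule is simply a threshold on the predicted KM probability; a short sketch using the `RF_classifier` and `test_data` objects created above (the next chunk computes the full confusion matrix):
```
# Decision rule: declare M when the KM class probability exceeds 0.5.
prob_KM  <- predict(RF_classifier, test_data, type = "prob")[, "KM"]
decision <- ifelse(prob_KM > 0.5, "M", "NM")
table(decision, truth = test_data$class)
```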
```
# Get prediction from RF classifier in test data
pred_class <- predict(RF_classifier, test_data)
table_pred_class <- confusionMatrix(pred_class, test_data$class)$table
table2 <- data.frame(table_pred_class) %>% spread(Reference, Freq)
table2$Prediction <- c("M", "NM")
table2 %>% knitr::kable(format = "html", caption = "Classification result of test set",
longtable = FALSE, row.names = FALSE)
```
Table 7\.2: Classification result of test set
| Prediction | KM | KNM |
| --- | --- | --- |
| M | 409 | 69 |
| NM | 11 | 375 |
Table [7\.2](glass.html#tab:testtable) shows the classification results on the testing set from the random forest. There are 420 KM and 444 KNM comparisons in the test set. For the KM cases, the RF classifier correctly predicts the source as M in 97\.4% of cases and wrongly predicts the source as NM in 2\.6% of cases; the latter is the false negative rate (FNR). For the KNM cases, the RF correctly classifies them as NM 84\.5% of the time, while 15\.5% are incorrectly classified as M; this is the false positive rate (FPR). The FPR is relatively high (15\.5%) because the data come from many panes produced close together in time by the same manufacturer, which are difficult to distinguish from one another.
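As a quick arithmetic check, these rates follow directly from the counts in Table [7\.2](glass.html#tab:testtable):
```
# Test-set error rates computed from the counts in Table 7.2.
FNR <- 11 / (409 + 11)   # mated pairs wrongly declared NM: about 0.026
FPR <- 69 / (69 + 375)   # non-mated pairs wrongly declared M: about 0.155
c(FNR = FNR, FPR = FPR)
```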
```
# class probability for the same pane is used as similarity score for RF
prob_prediction <- predict(RF_classifier, test_data, type = "prob")[, "KM"]
test_data$prob <- prob_prediction
test_data %>% ggplot() + geom_density(aes(x = prob, fill = class), alpha = 0.7) +
scale_fill_manual(name = "Class", values = cols) + labs(x = "Empirical probability for KM from RF classifier",
y = "Density") + theme(legend.position = "top")
```
Figure 7\.6: Scores from the RF classifier for the test set. Ground truth (either KM or KNM) is known. Any observations above 0\.5 are declared same source, while all others are declared different source. Notice the long tail in the KNM cases.
Figure [7\.6](glass.html#fig:testscore) shows the distribution of the RF scores, colored by true class, in the test data. The score is the empirical class probability that an observation belongs to the KM class. The modes of the two classes are well separated, although there is still some overlap between the classes’ density curves. The tail of the distribution of scores for KNM pairs is heavier than that of the KM pairs, which helps explain the higher false positive rate. Table [7\.2](glass.html#tab:testtable) shows the predicted classification results using a cut\-off of 0\.5\. If the RF score is larger than 0\.5, then we predict that the pair of glass fragments have the same source (M). If not, then we declare that they have different sources (NM).
7\.5 Case Study
---------------
Here we introduce a new set of five pairwise comparisons and use the random forest classifier we trained in Section 7\.3 to determine if the two samples being compared are from the same source of glass or from different sources.
Suppose you are given data, `new_samples`, on eight glass samples, where each sample has been measured five times for each element.
Your supervisor wants you to make five comparisons:
| compare\_id | sample\_id\_1 | sample\_id\_2 |
| --- | --- | --- |
| 1 | 1 | 2 |
| 2 | 1 | 3 |
| 3 | 1 | 5 |
| 4 | 4 | 6 |
| 5 | 7 | 8 |
First, you need to take the log of the measurements, then get the mean for each sample and each element.
```
# take the log, then the mean of each for comparison
new_samples <- mutate(new_samples, logppm = log(ppm)) %>%
select(-ppm) %>%
group_by(sample_id, element) %>%
summarise(mean_logppm = mean(logppm))
```
Next, we make each of the five comparisons. We write a function, `make_compare()`, to do so, and then use [`purrr::map2()`](https://purrr.tidyverse.org/reference/map2.html) to perform the comparison for each row of `comparisons`. The resulting compared data are computed below.
```
make_compare <- function(id1, id2) {
dat1 <- new_samples %>% filter(sample_id == id1) %>% ungroup() %>% select(element,
mean_logppm)
    dat2 <- new_samples %>% filter(sample_id == id2) %>% ungroup() %>%
        select(element, mean_logppm)
tibble(element = dat1$element, diff = abs(dat1$mean_logppm - dat2$mean_logppm)) %>%
spread(element, diff)
}
compared <- comparisons %>% mutate(compare = map2(sample_id_1, sample_id_2,
make_compare)) %>% unnest()
DT::datatable(compared) %>% DT::formatRound(4:21, 3)
```
Finally, we take the compared data and predict using the random forest object:
```
newpred <- predict(RF_classifier, newdata = compared, type = "prob")
```
```
# only take the id data from compare, and the M prediction from newpred
bind_cols(compared[, 1:3], score = newpred[, 1]) %>% mutate(prediction = ifelse(score >
0.5, "M", "NM")) %>% knitr::kable(format = "html", caption = "Predicted scores of new comparison cases from the random forest classifier",
longtable = FALSE, row.names = FALSE, digits = 3, col.names = c("Comparison ID",
"Sample ID 1", "Sample ID 2", "Score", "Decision"))
```
Table 7\.3: Predicted scores of new comparison cases from the random forest classifier
| Comparison ID | Sample ID 1 | Sample ID 2 | Score | Decision |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2 | 0\.812 | M |
| 2 | 1 | 3 | 0\.238 | NM |
| 3 | 1 | 5 | 0\.002 | NM |
| 4 | 4 | 6 | 0\.430 | NM |
| 5 | 7 | 8 | 0\.618 | M |
Table [7\.3](glass.html#tab:newtesttable) shows the RF score, defined as the empirical class probability that the samples are from the same pane (M) according to the RF algorithm[6](#fn6). Based on these scores, it appears that samples 1 and 2 are very likely from the same source, while samples 1 and 5 are very likely from different sources. Using a threshold of 0\.5, sample 4 and sample 6 are predicted to be from different sources. However, this score is so close to the threshold of 0\.5 that we may want to call this comparison inconclusive.[7](#fn7) Samples 1 and 3 are probably from different sources, though we are less confident in that determination than we are in the decision that samples 1 and 5 are from different sources. Finally, samples 7 and 8 are probably from the same source, but we are less certain of this than we are that samples 1 and 2 are from the same source.
More research into the appropriateness of this random forest method for making glass source conclusions is needed. Important points to consider are:
* What database should be used to train the algorithm?
* Could the random forest method overfit, reducing the generalizability of the method?
Nevertheless, the RF classifier makes good use of the glass fragment data, with its high dimension and small number of repeated measurements. More details on the RF classifier and additional discussion can be found in Park and Carriquiry ([2019](#ref-park2018)[a](#ref-park2018)).
| Field Specific |
sctyner.github.io | https://sctyner.github.io/OpenForSciR/decision-making.html |
Chapter 8 Decision\-making in Forensic Identification Tasks
===========================================================
#### *Amanda Luby*
8\.1 Introduction
-----------------
Although forensic measurement and analysis tools are increasingly accurate and objective, many final decisions are largely left to individual examiners (PCAST [2016](#ref-pcast)). Human decision\-makers will continue to play a central role in forensic science for the foreseeable future, and it is unrealistic to assume that, within the United States’ current criminal justice system,
* there are no differences in the decision\-making process between examiners,
* day\-to\-day forensic decision\-making tasks are equally difficult, or
* human decision\-making can be removed from the process entirely.
The role of human decisions in forensic science is perhaps most studied in the fingerprint domain, which will be the focus of this chapter. High\-profile examples of misidentification have inspired studies showing that fingerprint examiners, like all humans, may be susceptible to biased instructions and unreliable in final decisions (Dror and Rosenthal [2008](#ref-dror2008meta)) or influenced by external factors or contextual information (Dror, Charlton, and Péron [2006](#ref-dror2006); Dror and Cole [2010](#ref-dror2010vision)). These studies contradict common perceptions of the accuracy of fingerprint examination, and demonstrate that fingerprint analysis is far from error\-free.
Although fingerprint examination is the focus of this chapter, it is not the only forensic domain that relies on human decision\-making. Firearms examination (see, e.g., NRC ([2009](#ref-nas2009)[b](#ref-nas2009)) pg. 150\-155\) is similar to latent print examination in many ways, particularly in that examiners rely on pattern evidence to determine whether two cartridges originated from the same source. Handwriting comparison (see National Research Council ([2009](#ref-nas2009)[b](#ref-nas2009)) pg. 163\-167 on “Questioned Document Examination” and Stoel et al. ([2010](#ref-stoelshaky)) for discussion) consists of examiners determining whether two samples of handwriting were authored by the same person, taking potential forgery or disguise into account. A third example is interpreting mixtures of DNA evidence (see PCAST ([2016](#ref-pcast)) Section 5\.2\). A DNA mixture is a biological sample that contains DNA from two or more donors and requires analysts to make subjective decisions to determine how many individuals contributed to the DNA profile. Due to these currently unavoidable human factors, the President’s Council of Advisors on Science and Technology ([2016](#ref-pcast)) recommended increased “black box” error rate studies for these and other subjective forensic science methods.
The FBI “Black Box” study (Bradford T. Ulery et al. [2011](#ref-ulery2011)) was the first large\-scale study performed to assess the accuracy and reliability of latent print examiners’ decisions. The questions included a range of attributes and quality seen in casework, and were representative of searches from an automated fingerprint identification system. The overall false positive rate in the study was 0\.1% and the overall false negative rate was 7\.5%. These computed quantities, however, have excluded all “inconclusive” responses (i.e. neither identifications nor exclusions). This is noteworthy, as nearly a third of all responses were inconclusive and respondents varied on how often they reported inconclusives. Respondents who report a large number of inconclusives, and only make identification or exclusion decisions for the most pristine prints, will likely make far fewer false positive and false negative decisions than respondents who reported fewer inconclusives. The authors of the study also note that it is difficult to compare the error rates and inconclusive rates of individual examiners because each examiner saw a different set of fingerprint images (see Appendix 3 of Bradford T. Ulery et al. ([2011](#ref-ulery2011))). In other words, it would be unfair to compare the error rate of someone who was given a set of “easy” questions to the error rate of someone who was given a set of “difficult” questions. A better measure of examiner skill would account for both error rates and difficulty of prints that were examined.
Accurately measuring proficiency, or examiner skill, is valuable not only for determining whether a forensic examiner has met baseline competency requirements, but for training purposes as well. Personalized feedback after participating in a study could lead to targeted training for examiners in order to improve their proficiency. Additionally, if proficiency is not accounted for among a group of study participants, which often include trainees or non\-experts as well as experienced examiners, the overall results from the study may be biased.
There also exist substantial differences in the difficulty of forensic evaluation tasks. Properties of the evidence, such as the quality, quantity, concentration, or rarity of characteristics may make it easier or harder to evaluate. Some evidence, regardless of how skilled the examiner is, will not have enough information to result in an identification or exclusion in a comparison task. An inconclusive response, in this case, should be treated as the “correct” response. Inconclusive responses on more straightforward identification tasks, on the other hand, may be treated as mistakes.
Methods for analyzing forensic decision\-making data should thus provide estimates for both participant proficiency and evidence difficulty. *Item response models*, a class of statistical methods used prominently in educational testing, have been proposed for use in forensic science for these reasons (Kerkhoff et al. [2015](#ref-kerkhoff2015)). Luby and Kadane ([2018](#ref-luby2018proficiency)) provided the first item response analysis for forensic proficiency test data, and we improve and extend upon that work by
* analyzing a different fingerprint identification study that includes richer data on decision\-making, and
* extending the range of models considered.
The remainder of the chapter is organized as follows: Section [8\.1\.1](decision-making.html#irt) gives a brief overview of Item Response Models, Section [8\.2](decision-making.html#humans-data) provides an overview on how decision\-making data is collected in forensic science, and Section [8\.3](decision-making.html#rpackages) describes an R package that can be used to fit these models. Section [8\.4](decision-making.html#humans-conclusions) describes how conclusions are drawn from an Item Response analysis, and Section [8\.5](decision-making.html#casestudy) gives an example IRT analysis of the FBI “Black Box” study.
### 8\.1\.1 A Brief Overview of Item Response Models
For \\(P\\) individuals responding to \\(I\\) test items, we can express the binary responses (i.e. correct/incorrect) as a \\(P\\times I\\) matrix, \\(Y\\). Item Response Theory (IRT) is based on the idea that the probability of a correct response depends on individual *proficiency*, \\(\\theta\_p, p \= 1, \\ldots, P\\), and item *difficulty*, \\(b\_i, i \= 1, \\ldots, I\\).
#### 8\.1\.1\.1 Rasch Model
The Rasch Model (Rasch [1960](#ref-rasch1960studies); Fischer and Molenaar [2012](#ref-raschbook)) is a relatively simple yet powerful item response model, and serves as the basis for extensions introduced later. The probability of a correct response is modeled as a logistic function of the difference between the participant proficiency, \\(\\theta\_p\\) (\\(p\=1, \\dots, P\\)), and the item difficulty, \\(b\_i\\) (\\(i\=1, \\dots, I\\)):
\\\[\\begin{equation}
P(Y\_{pi} \= 1\) \= \\frac{1}{1\+\\exp(\-(\\theta\_p \- b\_i))}.
\\tag{8\.1}
\\end{equation}\\]
To identify the model, we shall use the convention of constraining the mean of the participant parameters (\\(\\mu\_\\theta\\)) to be equal to zero. This allows for a nice interpretation of both participant and item parameters relative to the “average participant”. If \\(\\theta\_p \>0\\), participant \\(p\\) is of “above average” proficiency and if \\(\\theta\_p \<0\\), participant \\(p\\) is of “below average” proficiency. Similarly, if \\(b\_i \< 0\\) question \\(i\\) is an “easier” question and the average participant is more likely to correctly answer question \\(i\\). If \\(b\_i \>0\\) then question \\(i\\) is a more “difficult” question and the average participant is less likely to correctly answer question \\(i\\). Other common conventions for identifying the model include setting a particular \\(b\_i\\) or the mean of the \\(b\_i\\)s equal to zero.
The item characteristic curve (ICC) describes the relationship between proficiency and performance on a particular item (see Figure [8\.1](decision-making.html#fig:hf-extensionsexample) for examples). For item parameters estimated under a Rasch model, all ICCs are standard logistic curves with different locations on the latent difficulty/proficiency scale.
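For intuition, the Rasch ICC can be computed directly with the standard logistic function in R; a minimal sketch with arbitrary difficulty values:
```
# Rasch item characteristic curves: P(correct) = plogis(theta - b).
theta <- seq(-4, 4, length.out = 200)
plot(theta, plogis(theta - 0), type = "l",
     xlab = "Proficiency (theta)", ylab = "P(correct)")   # item with b = 0
lines(theta, plogis(theta - 1), lty = 2)   # harder item (b = 1): curve shifts right
lines(theta, plogis(theta + 1), lty = 3)   # easier item (b = -1): curve shifts left
```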
Note that Equation [(8\.1\)](decision-making.html#eq:rasch) also describes a generalized linear model (GLM), where \\(\\theta\_p \- b\_i\\) is the linear component, with a logit link function. By formulating the Rasch Model as a hierarchical GLM with prior distributions on both \\(\\theta\_p\\) and \\(b\_i\\), the identifiability problem is solved. We assign \\(\\theta\_p \\sim N(0, \\sigma\_\\theta^2\)\\) and \\(b\_i \\sim N(\\mu\_b, \\sigma\_b^2\)\\), although more complicated prior distributions are certainly possible.
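One way to see this GLM connection in R, separate from the Stan implementation used later, is to fit the Rasch model as a logistic mixed model with crossed random effects. This is a hedged sketch assuming a long data frame `resp` with columns `score` (0/1), `examiner`, and `item`, all of which are illustrative names.
```
# Rasch-as-GLMM sketch: random intercepts for examiners (proficiency) and
# items (negative difficulty), with a logit link.
library(lme4)
rasch_glmm <- glmer(score ~ 1 + (1 | examiner) + (1 | item),
                    data = resp, family = binomial)
summary(rasch_glmm)
```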
The *two\-parameter logistic model* (2PL) and *three\-parameter logistic model* (3PL) are additional popular item response models (Lord [1980](#ref-lord1980applications)). They are both similar to the Rasch model in that the probability of a correct response depends on participant proficiency and item difficulty, but additional item parameters are also included. We omit a full discussion of these models here, but further reading may be found in van der Linden and Hambleton ([2013](#ref-van2013handbook)) and Boeck and Wilson ([2004](#ref-eirtbook)).
Figure 8\.1: Item Characteristic Curve (ICC) examples for the Rasch, 2PL, and 3PL models.
#### 8\.1\.1\.2 Partial Credit Model
The *partial credit model* (PCM) (Masters [1982](#ref-masters1982)) is distinct from the models discussed above because it allows for the response variable, \\(Y\_{pi}\\), to take additional values beyond zero (incorrect) and one (correct). This is especially useful for modeling partially correct responses, although it may be applied in other contexts where the responses can be ordered. When \\(Y\_{pi}\\) is binary, the partial credit model is equivalent to the Rasch model. Under the PCM, the probability of response \\(Y\_{pi}\\) depends on \\(\\theta\_p\\), the proficiency of participant \\(p\\) as in the above models; \\(m\_i\\), the maximum score for item \\(i\\) (and the number of step parameters); and \\(\\beta\_{il}\\), the \\(l^{th}\\) step parameter for item \\(i\\) (\\(l\=0, \\dots, m\_i\\)):
\\\[\\begin{equation}
P(Y\_{pi} \= 0\) \= \\frac{1}{1\+\\sum\_{k\=1}^{m\_i} \\exp \\sum\_{l\=1}^k (\\theta\_p \- \\beta\_{il})}
\\end{equation}\\]
\\\[\\begin{equation}
P(Y\_{pi} \= y, y\>0\) \= \\frac{\\exp\\sum\_{l\=1}^y(\\theta\_p \- \\beta\_{il})}{1\+\\sum\_{k\=1}^{m\_i}\\exp \\sum\_{l\=1}^k (\\theta\_p \- \\beta\_{il})}.
\\tag{8\.2}
\\end{equation}\\]
An example PCM is shown in Figure [8\.2](decision-making.html#fig:hf-pcmexample) by plotting the probabilities of observing each of three categories as a function of \\(\\theta\_p\\) (analogous to the ICC curves above).
Figure 8\.2: Category response functions for the PCM.
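A small sketch of Equation (8\.2\) as an R function, with arbitrary step parameters chosen for illustration:
```
# Category probabilities for a single PCM item with step parameters beta.
pcm_probs <- function(theta, beta) {
  numer <- c(1, exp(cumsum(theta - beta)))  # y = 0 contributes exp(0) = 1
  numer / sum(numer)                        # probabilities for y = 0, ..., m_i
}
pcm_probs(theta = 0.5, beta = c(-1, 1))     # a three-category item (m_i = 2)
```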
8\.2 Data
---------
The vast majority of forensic decision\-making occurs in casework, information about which is not often made available to researchers due to privacy concerns. Outside of casework, data on forensic science decision\-making is collected through proficiency test results and in error rate studies. *Proficiency tests* are periodic competency exams that must be completed for forensic laboratories to maintain their accreditation. *Error rate studies* are independent research studies designed to measure casework error rates. As their names suggest, these two data collection scenarios serve completely different purposes. Proficiency tests are designed to assess *basic competency* of individuals, and mistakes are rare. Error rate studies are designed to mimic the difficulty of evidence in casework and estimate the *overall error rate*, aggregating over many individuals, and mistakes are more common by design.
Proficiency exams consist of a large number of participants (often \\(\>400\\)) responding to a small set of questions (often \\(\<20\\)). Since every participant answers every question, we can assess participant proficiency and question difficulty using the observed scores. As proficiency exams are designed to assess basic competency, most questions are relatively easy and the vast majority of participants score 100%. Error rate studies, on the other hand, consist of a smaller number of participants (fewer than \\(200\\)) and a larger pool of questions (more than \\(500\\)). The questions are designed to be difficult, and every participant does not answer every question, which makes determining participant proficiency and question difficulty a more complicated task.
Results from both proficiency tests and error rate studies can be represented as a set of individuals responding to several items, in which responses can be scored as correct or incorrect. This is not unlike an educational testing scenario where students (individuals) answer questions (items) either correctly or incorrectly. There is a rich body of statistical methods for estimating student proficiency and item difficulty from test responses. Item Response Theory (IRT) is used extensively in educational testing to study the relationship between an individual’s (unobserved) proficiency and their performance on varying tasks. IRT is an especially useful tool to estimate participant proficiencies and question difficulties when participants do not necessarily answer the same set of questions.
8\.3 R Packages
---------------
The case study makes use of the [`blackboxstudyR`](https://github.com/aluby/blackboxstudyR) R package (Luby [2019](#ref-R-blackboxstudyR)), which provides functions for working with the FBI black box data, implementations of basic IRT models in Stan (Guo, Gabry, and Goodrich [2018](#ref-R-rstan)), and plotting functions for results.
The primary functions of `blackboxstudyR` include:
* `score_bb_data()`: Scores the FBI “Black Box” data under one of five scoring schemes.
* `irt_data_bb()`: Formats the FBI “Black Box” data into a form appropriate for fitting a Stan model.
* `fit_irt()`: Wrapper for Stan to fit standard IRT models to data (the data need not be the FBI data). Models currently available are:
+ Rasch Model (Section [8\.1\.1\.1](decision-making.html#rasch-model))
+ 2PL Model (Section [8\.1\.1\.1](decision-making.html#rasch-model))
+ Partial Credit Model (Section [8\.1\.1\.2](decision-making.html#partial-credit-model))
* `plot_difficulty_posteriors` and `plot_proficiency_posteriors`: Plot posterior intervals for difficulty and proficiency estimates, respectively.
8\.4 Drawing Conclusions
------------------------
An IRT analysis produces estimates of both participant proficiency and item difficulty. As mentioned previously, this property is especially useful for settings where participants respond to different subsets of items, as it allows all participants to be compared on the same scale.
By comparing the estimated proficiency to more traditional measures of participant performance (e.g. false positive rate or false negative rate), we can see whether there are aspects captured by proficiency that are not captured in other measures. For instance, the false positive rate and the false negative rate contain no information about the inconclusive rate, while IRT does implicitly, as it accounts for the number of questions answered by each participant.
In the forensic science setting, completing an IRT analysis will often include an additional step of choosing how the data should be scored. For example, should inconclusive responses be scored as incorrect or treated as missing? An additional question we may wish to answer is, “Which scoring scheme is most appropriate for the setting at hand?” In some cases, the optimal scoring scheme may be determined using expert knowledge, or by specifying the expected answers to each item beforehand. In other cases, it may not be possible to determine the optimal scoring scheme before fitting an IRT model. In those cases, multiple scoring methods should be used to fit an IRT model, and the results from each model should be compared and contrasted.
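As a sketch of that comparison workflow, one could loop the fitting pipeline over several binary scoring schemes using the `blackboxstudyR` helpers shown in the case study below; the scheme names and function calls follow the package usage illustrated there, and the loop itself is illustrative rather than prescribed by the package.
```
# Hedged sketch: fit the Rasch model under several binary scoring schemes.
schemes <- c("inconclusive_incorrect", "inconclusive_mcar",
             "no_consensus_incorrect", "no_consensus_mcar")
fits <- lapply(schemes, function(s) {
  scored <- score_bb_data(TestResponses, s)
  dat    <- irt_data_bb(TestResponses, scored)
  fit_rasch(dat, iterations = 600, n_chains = 4)
})
names(fits) <- schemes
```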
8\.5 Case Study
---------------
We use the FBI “black box” data (`blackboxstudyR::TestResponses`) for our case study. `TestResponses` is a data frame in which each row corresponds to one examiner’s response to one item (a pair of prints). In addition to the examiner ID (`Examiner_ID`) and item ID (`Pair_ID`), the data contains:
* `Mating`: whether the pair of prints were “Mates” (same source) or “Non\-mates” (different source)
* `Latent_Value`: the examiner’s assessment of the value of the print (NV \= No Value, VEO \= Value for Exclusion Only, VID \= Value for Individualization)
* `Compare_Value`: the examiner’s assessment of whether the pair of prints is an “Exclusion”, “Inconclusive” or “Individualization”
* `Inconclusive_Reason`: If inconclusive, the reason for the inconclusive
+ “Close”: *The correspondence of features is supportive of the conclusion that the two impressions originated from the same source, but not the extent sufficient for individualization.*
+ “Insufficient”: *Potentially corresponding areas are present, but there is insufficient information present.* Participants were told to select this reason if the reference print was not of value.
+ “No Overlap”: *No overlapping area between the latent and reference*
* `Exclusion_Reason`: If exclusion, the reason for the exclusion
+ “Minutiae”
+ “Pattern”
* `Difficulty`: Reported difficulty ranging from “A\_Obvious” to “E\_VeryDifficult”
In order to fit an IRT model, we must first score the data. Responses are scored as correct if they are true identifications (`Mating == Mates` and `Compare_Value == Individualization`) or exclusions (`Mating == Non-mates` and `Compare_Value == Exclusion`). Similarly, responses are scored as incorrect if they are false identifications (`Mating == Non-mates` and `Compare_Value == Individualization`) or false exclusions (`Mating == Mates` and `Compare_Value == Exclusion`).
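A hedged sketch of this basic scoring rule in `dplyr` is shown below; the packaged `score_bb_data()` function implements the full set of schemes, including the handling of inconclusives described next, so this is only an illustration of the rule as stated above.
```
library(dplyr)
# Score true identifications/exclusions as correct and false ones as incorrect;
# inconclusive and no-value responses are left NA here and handled by the
# chosen scoring scheme.
scored_sketch <- TestResponses %>%
  mutate(score = case_when(
    Mating == "Mates"     & Compare_Value == "Individualization" ~ 1,
    Mating == "Non-mates" & Compare_Value == "Exclusion"         ~ 1,
    Mating == "Non-mates" & Compare_Value == "Individualization" ~ 0,
    Mating == "Mates"     & Compare_Value == "Exclusion"         ~ 0,
    TRUE                                                         ~ NA_real_
  ))
```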
Inconclusive responses, which are never keyed as correct responses, complicate the scoring of the exam due to both their ambiguity and prevalence. There are a large number of inconclusive answers (4907 of 17121 responses), and examiners vary on which latent print pairs are inconclusive.
The `blackboxstudyR` package includes five methods to score inconclusive responses:
1. Score all inconclusive responses as incorrect (`inconclusive_incorrect`). This may penalize participants who were shown more vague or harder questions and therefore reported more inconclusives.
2. Treat inconclusive responses as missing completely at random (`inconclusive_mcar`). This decreases the amount of data included in the analysis, and does not explicitly penalize examiners who report many inconclusives. This is the scoring method most similar to the method used in Bradford T. Ulery et al. ([2011](#ref-ulery2011)) to compute false positive and false negative rates.
3. Score inconclusive as correct if the reason given for an inconclusive is “correct”. Since the ground truth “correct” inconclusive reason is unknown, the consensus reason from other inconclusive responses for that question is used. If no consensus reason exists, the inconclusive response was scored in one of two ways:
1. Treat inconclusive responses as incorrect if no consensus reason exists (`no_consensus_incorrect`).
2. Treat inconclusive responses as missing completely at random if no consensus reason exists (`no_consensus_mcar`).
4. Score inconclusive responses as “partial credit” (`partial_credit`).
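All five scheme names can be passed as the second argument of `score_bb_data()`. As a usage sketch (the `schemes` and `all_scored` names are ours, not part of the package), the data could be scored under every scheme at once:

```
# Score the black box data under each of the five inconclusive-handling schemes
library(blackboxstudyR)

schemes <- c("inconclusive_incorrect", "inconclusive_mcar",
             "no_consensus_incorrect", "no_consensus_mcar", "partial_credit")
all_scored <- lapply(setNames(schemes, schemes),
                     function(s) score_bb_data(TestResponses, s))
```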
In the remainder of the case study we will
1\. demonstrate how to fit an IRT model in R,
2\. illustrate how IRT analysis complements an error rate analysis by accounting for participants seeing different sets of questions, and
3\. show how different scoring methods can change results from an IRT analysis.
### 8\.5\.1 Fitting the IRT model
We’ll proceed with an IRT analysis of the data under the `inconclusive_mcar` scoring scheme, which is analogous to how the data were scored in Bradford T. Ulery et al. ([2011](#ref-ulery2011)).
```
library(blackboxstudyR)   # provides TestResponses plus scoring, model-fitting, and plotting helpers
im_scored <- score_bb_data(TestResponses, "inconclusive_mcar")
```
Scoring the black box data as above gives us the response variable (\\(y\\)). The `irt_data_bb` function takes the original black box data, along with the scored variable produced by `score_bb_data`, and produces a list object in the form needed by Stan to fit the IRT models. If you wish to fit the models to a different dataset, you can do so as long as the data are formatted as a list object with the same attributes as the `irt_data_bb` output (see the package documentation for additional details).
```
im_data <- irt_data_bb(TestResponses, im_scored)
```
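Before fitting, it can be useful to inspect the list that will be passed to Stan; the component names and dimensions are documented in the package, so here we only peek at the top level.

```
# Look at the top-level components of the Stan data list produced by irt_data_bb()
str(im_data, max.level = 1)
```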
We can now use `fit_rasch` to fit the Rasch model.
```
im_model <- fit_rasch(im_data, iterations = 600, n_chains = 4)
```
In practice, it is necessary to check that the MCMC sampler has converged using a variety of diagnostics. We omit these steps here for brevity, but the `blackboxstudyR` package will include a vignette detailing this process; see also, e.g., Gelman et al. ([2013](#ref-bda3)).
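As a sketch of what such checks might look like, the code below assumes that `im_model` is (or wraps) an `rstan` `stanfit` object; if `fit_rasch()` returns a different structure, the accessor would need to be adapted.

```
# Basic MCMC diagnostics, assuming `im_model` behaves like an rstan stanfit object
library(rstan)

fit_summary <- rstan::summary(im_model)$summary
max(fit_summary[, "Rhat"], na.rm = TRUE)   # should be close to 1 (e.g. < 1.01)
min(fit_summary[, "n_eff"], na.rm = TRUE)  # effective sample sizes should not be tiny
rstan::check_hmc_diagnostics(im_model)     # divergences, tree depth, E-BFMI warnings
```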
After the model has been fit, we can plot the posterior distributions of difficulties and proficiencies. In the code below, `im_samples` denotes the posterior draws from this model (draws under this name are also available from the package as `blackboxstudyR::im_samples`, as used later in the chapter):
```
library(ggplot2)   # my_theme is assumed to be a previously defined ggplot2 theme object
library(ggpubr)    # ggarrange() arranges the two panels side by side
p1 <- plot_proficiency_posteriors(im_samples) + my_theme
p2 <- plot_difficulty_posteriors(im_samples) + my_theme
ggarrange(p1, p2, ncol = 2)
```
The lighter gray interval represents the 95% posterior interval and the black interval represents the 50% posterior interval. If we examine the posterior intervals for the difficulty estimates (\\(b\\)), we can see groups which have noticeably larger intervals, and thus more uncertainty regarding the estimate:
1\. those on the bottom left
2\. those on the upper right, and
3\. those in the middle.
These three groups of uncertain estimates correspond to:
1\. the questions that every participant answered correctly,
2\. the questions that every participant answered incorrectly, and
3\. the questions that every participant reported as an “inconclusive” or “no value”.
### 8\.5\.2 IRT complements an error rate analysis
The original analysis of the FBI “Black Box” Study (see Bradford T. Ulery et al. [2011](#ref-ulery2011)) did not include an analysis of individual participant error rates, because each participant saw a different question set. Since proficiency accounts for the difficulty of question sets, however, we can directly compare participant proficiencies to each other, and also see how error rates and proficiency are related.
First, we compute the observed person scores.
```
obs_p_score <- bb_person_score(TestResponses, im_scored)
```
In order to use the `error_rate_analysis` function, we need to extract the median question difficulties from MCMC results.
```
# Posterior medians of all parameters, keeping only the item difficulties b[i]
post_medians <- apply(im_samples, 3, median)
q_diff <- post_medians[grep("b\\[", names(post_medians))]
ex_error_rates <- error_rate_analysis(TestResponses, q_diff)
```
Now, we can plot the proficiency estimates (with 95% posterior intervals) against the results from a traditional error rate analysis.
```
library(dplyr)   # pipes and join functions used in the plotting code below
# Left panel: proficiency vs false positive rate
p1 <- person_mcmc_intervals(im_samples) %>% right_join(., obs_p_score, by = "exID") %>%
full_join(., ex_error_rates, by = "exID") %>% dplyr::select(., score, m,
ll, hh, exID, avg_diff, fpr, fnr) %>% ggplot(., aes(x = fpr, y = m, ymin = ll,
ymax = hh)) + geom_pointrange(size = 0.3) + labs(x = "False Positive Rate",
y = "Proficiency Estimate") + my_theme
# Right panel: proficiency vs false negative rate, colored by whether any false positive was made
p2 <- person_mcmc_intervals(im_samples) %>% right_join(., obs_p_score, by = "exID") %>%
full_join(., ex_error_rates, by = "exID") %>% dplyr::select(., score, m,
ll, hh, exID, avg_diff, fpr, fnr) %>% ggplot(., aes(x = fnr, y = m, ymin = ll,
ymax = hh, color = fpr > 0)) + geom_pointrange(size = 0.3) + labs(x = "False Negative Rate",
y = "Proficiency Estimate") + scale_colour_manual(values = c("black", "steelblue")) +
my_theme + theme(legend.position = "none")
ggarrange(p1, p2, ncol = 2)
```
Figure 8\.3: Proficiency vs False Positive Rate (left) and False Negative Rate (right)
Figure [8\.3](decision-making.html#fig:hf-error-rate-plots) shows proficiency against the false positive rate (left) and false negative rate (right). Those participants who made at least one false positive error are colored in blue on the right side plot. We see that one of the participants who made a false positive error still received a relatively large proficiency estimate due to having such a small false negative rate.
If, instead of looking at error rates for each participant, we examine observed scores, the estimated proficiencies correlate with the observed score (Figure [8\.4](decision-making.html#fig:hf-prof-observed)). That is, participants with a higher observed score are generally given larger proficiency estimates than participants with lower scores. There are, however, cases where participants scored roughly the same on the study but are given vastly different proficiency estimates. For example, the highlighted participants in the right panel of Figure [8\.4](decision-making.html#fig:hf-prof-observed) all scored between 94% and 96%, but their estimated proficiencies range from \\(\-1\.25\\) to \\(2\.5\\).
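As a quick numerical companion to this figure, we can reuse the join from the plotting code and compute the correlation between the observed score and the posterior median proficiency (`m`); the `prof_vs_score` name is ours.

```
# Association between observed score and median proficiency estimate
prof_vs_score <- person_mcmc_intervals(im_samples) %>%
  right_join(obs_p_score, by = "exID")
cor(prof_vs_score$score, prof_vs_score$m, use = "complete.obs")
```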
```
library(gghighlight)   # gghighlight() highlights the 94-96% score band in the right panel
p1 <- person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh)) + geom_pointrange(size = 0.3) +
labs(x = "Observed Score", y = "Proficiency Estimate") + my_theme
p2 <- person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh)) + geom_pointrange(size = 0.3) +
gghighlight(score < 0.96 & score > 0.94) + labs(x = "Observed Score", y = "Proficiency Estimate") +
my_theme
ggarrange(p1, p2, ncol = 2)
```
Figure 8\.4: Proficiency vs Observed Score
If we examine those participants who scored between 94% and 96% more closely, we can see that the discrepancies in their proficiencies are largely explained by the difficulty of the specific question set they saw. This is evidenced by the positive trend in Figure [8\.5](decision-making.html#fig:hf-prof-by-diff). In addition to the observed score and difficulty of the question set, the number of questions the participant answers conclusively (i.e. individualization or exclusion) also plays a role in the proficiency estimate. Participants who are conclusive more often generally receive higher estimates of proficiency than participants who are conclusive less often.
```
# Among examiners scoring 94-96%, plot proficiency vs average question difficulty,
# colored by the proportion of conclusive answers
person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
full_join(ex_error_rates, by = "exID") %>% dplyr::select(score, m, ll, hh,
exID, avg_diff, pct_skipped) %>% filter(score < 0.96 & score > 0.94) %>%
ggplot(aes(x = avg_diff, y = m, ymin = ll, ymax = hh, col = 1 - pct_skipped)) +
geom_pointrange(size = 0.3) + labs(x = "Avg Q Difficulty", y = "Proficiency Estimate",
color = "% Conclusive") + my_theme
```
Figure 8\.5: Proficiency vs Average Question Difficulty, for participants with observed score between 94 and 96 percent correct.
### 8\.5\.3 Scoring method affects proficiency estimates
To illustrate the difference in results between different scoring methods, we’ll now score the data and fit models in two more ways: `no_consensus_incorrect` and `partial_credit`.
```
nci_scored <- score_bb_data(TestResponses, "no_consensus_incorrect")
nci_data <- irt_data_bb(TestResponses, nci_scored)
pc_scored <- score_bb_data(TestResponses, "partial_credit")
pc_data <- irt_data_bb(TestResponses, pc_scored)
```
We use `fit_rasch` to fit the Rasch model to the `no_consensus_incorrect` data. Since the `partial_credit` data has three outcomes (correct, inconclusive, or incorrect) instead of only two (correct/incorrect), we use `fit_pcm` to fit a partial credit model to those data.
```
nci_model <- fit_rasch(nci_data, iterations = 1000, n_chains = 4)
pc_model <- fit_pcm(pc_data, iterations = 1000, n_chains = 4)
```
We can examine the proficiency estimates and observed scores for each participant under each of the three scoring schemes, similar to Figure [8\.4](decision-making.html#fig:hf-prof-observed) above. Under the partial credit scoring scheme, a correct identification/exclusion is scored as a “2”, an inconclusive response is scored as a “1” and an incorrect identification/exclusion is scored as a “0”. The observed score is then computed as \\((2 \\times \\\# Correct \+ \\\# Inconclusive) / (2 \\times \\\# Responses)\\), which scales the score to be between 0 and 1\.
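To illustrate the calculation, here is the partial-credit observed score for a small, made-up response vector (not data from the study):

```
# Partial-credit observed score for a hypothetical examiner:
# 2 = correct, 1 = inconclusive, 0 = incorrect
pc_responses <- c(2, 2, 1, 0, 2, 1, 1, 2)
sum(pc_responses) / (2 * length(pc_responses))  # equivalently mean(pc_responses) / 2
```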
```
# Observed scores and posterior intervals under each scoring scheme (im, nci, pc)
p_score_im <- bb_person_score(TestResponses, im_scored)
p_score_im <- person_mcmc_intervals(blackboxstudyR::im_samples) %>% right_join(p_score_im,
by = "exID") %>% mutate(scoring = rep("im", nrow(p_score_im)))
p_score_nci <- bb_person_score(TestResponses, nci_scored)
p_score_nci <- person_mcmc_intervals(blackboxstudyR::nci_samples) %>% right_join(p_score_nci,
by = "exID") %>% mutate(scoring = rep("nci", nrow(p_score_nci)))
p_score_pc <- bb_person_score(TestResponses, pc_scored)
p_score_pc <- person_mcmc_intervals(blackboxstudyR::pc_samples) %>% right_join(p_score_pc,
by = "exID") %>% mutate(scoring = rep("pc", nrow(p_score_pc)))
# Left panel: proficiency vs observed score, colored by scoring scheme
p1 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% ggplot(aes(x = score,
y = m, ymin = ll, ymax = hh, col = scoring)) + geom_pointrange(size = 0.3,
alpha = 0.5) + labs(x = "Observed Score", y = "Estimated Proficiency") +
my_theme
# Right panel: highlight examiners whose upper interval endpoint falls below -0.5
p2 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% group_by(exID) %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh, col = scoring, group = exID)) +
geom_pointrange() + gghighlight(hh < -0.5, use_group_by = FALSE) + geom_line(col = "black",
linetype = "dotted") + labs(x = "Observed Score", y = "Estimated Proficiency") +
geom_hline(yintercept = -0.5, col = "darkred", linetype = "dashed") + my_theme
ggarrange(p1, p2, ncol = 2, common.legend = TRUE, legend = "bottom")
```
Figure 8\.6: Proficiency vs Observed Score for each of three scoring schemes
Treating the inconclusives as missing (“im”) leads to both the smallest range of observed scores and the largest range of estimated proficiencies. Harsher scoring methods (e.g. `no_consensus_incorrect` (“nci”)) do not necessarily lead to lower estimated proficiencies. For instance, the participants who scored around 45% under the “nci” scoring method (in green) are given higher proficiency estimates than the participant who scored 70% under the “im” scoring method. The scoring method thus affects the proficiency estimates in a somewhat non\-intuitive way, as larger ranges of observed scores do not necessarily correspond to larger ranges of proficiency estimates.
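One way to see this numerically is to compare the spread of observed scores and of median proficiency estimates (`m`) across the three schemes, reusing the data frames built above (a sketch; it assumes the columns created earlier in this section):

```
# Range of observed scores and of median proficiencies under each scoring scheme
p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>%
  group_by(scoring) %>%
  summarise(score_range = diff(range(score)),
            proficiency_range = diff(range(m)))
```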
Also note that the uncertainty intervals under the “im” scoring scheme are noticeably larger than under the other scoring schemes. This is because the `inconclusive_mcar` scheme treats all of the inconclusives, nearly a third of the data, as missing. This missingness contributes no information to the difficulty and proficiency estimates. Under the other scoring schemes (`no_consensus_incorrect` and `partial_credit`) the inconclusive responses are never treated as missing, leading to a larger number of observations per participant and therefore a smaller amount of uncertainty in the proficiency estimates.
The range of proficiencies under different scoring schemes and the uncertainty intervals for the proficiency estimates both have substantial implications if we consider setting a “mastery level” for participants. As an example, let’s consider setting the mastery threshold at \\(\-0\.5\\). We will then say examiners have not demonstrated mastery if the upper end of their proficiency uncertainty interval is below \\(\-0\.5\\), as illustrated in the right plot of Figure [8\.6](decision-making.html#fig:hf-prof-three-scores).
The number of examiners that have not demonstrated mastery varies based on the scoring method used (11 for “nci”, 8 for “pc” and 11 for “im”) due to the variation in range of proficiency estimates. Additionally, for each of the scoring schemes, there are a number of examiners that did achieve mastery with the same observed score as those that did not demonstrate mastery. This is due to a main feature of item response models discussed earlier: participants that answered more difficult questions are given higher proficiency estimates than participants that answered the same number of easier questions.
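These counts can be reproduced directly from the interval data frames, assuming (as in the plots above) that `hh` is the upper endpoint of the plotted posterior interval:

```
# Number of examiners whose upper interval endpoint falls below the -0.5 threshold
sapply(list(im = p_score_im, nci = p_score_nci, pc = p_score_pc),
       function(d) sum(d$hh < -0.5))
```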
We’ve also drawn dotted lines between proficiency estimates that correspond to the same person. Note that many of the participants who do not achieve mastery under one scoring scheme *do* achieve mastery under the other scoring schemes, since not all of the points are connected by dotted lines. There are also a few participants who do not achieve mastery under any of the scoring schemes. This raises the question of how much the proficiency estimates change for each participant under the different scoring schemes.
The plot on the left in Figure [8\.7](decision-making.html#fig:hf-prof-by-id) shows both a change in examiner proficiencies across scoring schemes (the lines connecting the proficiencies are not horizontal) as well as a change in the ordering of examiner proficiencies (the lines cross one another). That is, different scoring schemes affect examiner proficiencies in different ways.
The plot on the right illustrates participants that see substantial differences in their proficiency estimates under different scoring schemes. Examiners 105 and 3 benefit from the leniency in scoring when inconclusives are treated as missing (“im”). When inconclusives are scored as incorrect (“nci”) or partial credit (“pc”), they see a substantial decrease in their proficiency due to reporting a high number of inconclusives and differing from other examiners in their reasoning for reporting inconclusives. Examiners 142, 60 and 110, on the other hand, are hurt by the leniency in scoring when inconclusives are treated as missing (“im”). Their proficiency estimates increase when inconclusives are scored as correct when they match the consensus reason (“nci”) or are worth partial credit (“pc”).
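To pull out the median proficiencies for these five examiners under each scheme, we can reuse the `id` construction from the plotting code below (a sketch; the ids refer to the constructed 1-169 index used as the plot labels, not to `Examiner_ID`):

```
# Median proficiency (m) of the five highlighted examiners under each scoring scheme
p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>%
  arrange(parameter) %>%
  mutate(id = rep(1:169, each = 3)) %>%
  filter(id %in% c(3, 60, 105, 110, 142)) %>%
  dplyr::select(id, scoring, m) %>%
  spread(scoring, m)
```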
```
library(tidyr)   # spread()/gather() reshape the median proficiencies by scoring scheme
p1 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% arrange(parameter) %>%
mutate(id = rep(1:169, each = 3)) %>% dplyr::select(id, m, scoring) %>%
spread(scoring, m) %>% mutate(max.diff = apply(cbind(abs(im - nci), abs(nci -
pc), abs(im - pc)), 1, max)) %>% gather("model", "median", -c(id, max.diff)) %>%
ggplot(aes(x = model, y = median, group = id, col = id)) + geom_point() +
geom_line() + labs(x = "Scoring Method", y = "Estimated Proficiency") +
my_theme
p2 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% arrange(parameter) %>%
mutate(id = rep(1:169, each = 3)) %>% dplyr::select(id, m, scoring) %>%
spread(scoring, m) %>% mutate(max.diff = apply(cbind(abs(im - nci), abs(nci -
pc), abs(im - pc)), 1, max)) %>% gather("model", "median", -c(id, max.diff)) %>%
ggplot(aes(x = model, y = median, group = id, col = id)) + geom_point() +
geom_line() + labs(x = "Scoring Method", y = "Estimated Proficiency") +
gghighlight(max.diff > 1.95, label_key = id, use_group_by = FALSE) + my_theme
ggarrange(p1, p2, ncol = 2, common.legend = TRUE, legend = "bottom")
```
Figure 8\.7: Change in proficiency for each examiner under the three scoring schemes. The right side plot has highlighted five examiners whose proficiency estimates change the most across schemes.
### 8\.5\.4 Discussion
We have provided an overview of human decision\-making in forensic analyses, through the lens of latent print comparisons and the FBI “black box” study (Bradford T. Ulery et al. [2011](#ref-ulery2011)). A brief overview of Item Response Theory (IRT), a class of models used extensively in educational testing, was given in Section [8\.1\.1](decision-making.html#irt), and a case study applying IRT to the FBI “black box” study was presented in Section [8\.5](decision-making.html#casestudy).
Results from an IRT analysis are largely consistent with conclusions from an error rate analysis. However, IRT provides substantially more information than a more traditional analysis, specifically by accounting for the difficulty of the questions each participant saw. Additionally, IRT implicitly accounts for the inconclusive rate of different participants and provides estimates of uncertainty for both participant proficiency and item difficulty. If IRT were to be adopted on a large scale, participants could be compared directly even if they took different exams (for instance, proficiency exams in different years).
Three scoring schemes were presented in the case study, each of which leads to substantially different proficiency estimates across participants. Although IRT is a powerful tool for better understanding examiner performance on forensic identification tasks, we must be careful when choosing a scoring scheme. This is especially important for analyzing ambiguous responses, such as the inconclusive responses in the “black box” study.
In the forensic science setting, completing an IRT analysis will often include an additional step of choosing how the data should be scored. For example, should inconclusive responses be scored as incorrect or treated as missing? An additional question we may wish to answer is, “Which scoring scheme is most appropriate for the setting at hand?” In some cases, the optimal scoring scheme may be determined using expert knowledge, or by specifying the expected answers to each item beforehand. In other cases, it may not be possible to determine the optimal scoring scheme before fitting an IRT model. In those cases, multiple scoring methods should be used to fit an IRT model, and the results from each model should be compared and contrasted.
8\.5 Case Study
---------------
We use the FBI “black box” data (`blackboxstudyR::TestResponses`) for our case study. `TestResponses` is a data frame in which each row corresponds to an examiner, each column represents the item, and the value in each unique combination of row and column is the examiner’s response to that item. In addition to the examiner ID (`Examiner_ID`) and item ID (`Pair_ID`), the data contains:
* `Mating`: whether the pair of prints were “Mates” (same source) or “Non\-mates” (different source)
* `Latent_Value`: the examiner’s assessment of the value of the print (NV \= No Value, VEO \= Value for Exclusion Only, VID \= Value for Individualization)
* `Compare_Value`: the examiner’s assessment of whether the pair of prints is an “Exclusion”, “Inconclusive” or “Individualization”
* `Inconclusive_Reason`: If inconclusive, the reason for the inconclusive
+ “Close”: *The correspondence of features is supportive of the conclusion that the two impressions originated from the same source, but not the extent sufficient for individualization.*
+ “Insufficient”: *Potentially corresponding areas are present, but there is insufficient information present.* Participants were told to select this reason if the reference print was not of value.
+ “No Overlap”: *No overlapping area between the latent and reference*
* `Exclusion_Reason`: If exclusion, the reason for the exclusion
+ “Minutiae”
+ “Pattern”
* `Difficulty`: Reported difficulty ranging from “A\_Obvious” to “E\_VeryDifficult”
In order to fit an IRT model, we must first score the data. Responses are scored as correct if they are true identifications (`Mating == Mates` and `Compare_Value == Individualization`) or exclusions (`Mating == Non-mates` and `Compare_Value == Exclusion`). Similarly, responses are scored as incorrect if they are false identifications (`Mating == Non-mates` and `Compare_Value == Individualization`) or false exclusions (`Mating == Mates` and `Compare_Value == Exclusion`).
Inconclusive responses, which are never keyed as correct responses, complicate the scoring of the exam due to both their ambiguity and prevalence. There are a large number of inconclusive answers (4907 of 17121 responses), and examiners vary on which latent print pairs are inconclusive.
The `blackboxstudyR` package includes five methods to score inconclusive responses:
1. Score all inconclusive responses as incorrect (`inconclusive_incorrect`). This may penalize participants who were shown more vague or harder questions and therefore reported more inconclusives.
2. Treat inconclusive responses as missing completely at random (`inconclusive_mcar`). This decreases the amount of data included in the analysis, and does not explicitly penalize examiners who report many inconclusives. This is the scoring method most similar to the method used in Bradford T. Ulery et al. ([2011](#ref-ulery2011)) to compute false positive and false negative rates.
3. Score inconclusive as correct if the reason given for an inconclusive is “correct”. Since the ground truth “correct” inconclusive reason is unknown, the consensus reason from other inconclusive responses for that question is used. If no consensus reason exists, the inconclusive response was scored in one of two ways:
1. Treat inconclusive responses as incorrect if no consensus reason exists (`no_consensus_incorrect`).
2. Treat inconclusive responses as missing completely at random if no consensus reason exists (`no_consensus_mcar`).
4. Score inconclusive responses as “partial credit” (`partial_credit`).
In the remainder of the case study we will
1\. demonstrate how to fit an IRT model in R,
2\. illustrate how IRT analysis complements an error rate analysis by accounting for participants seeing different sets of questions, and
3\. show how different scoring methods can change results from an IRT analysis.
### 8\.5\.1 Fitting the IRT model
We’ll proceed with an IRT analysis of the data under the `inconclusive_mcar` scoring scheme, which is analogous to how the data were scored under Bradford T. Ulery et al. ([2011](#ref-ulery2011)).
```
im_scored <- score_bb_data(TestResponses, "inconclusive_mcar")
```
Scoring the black box data as above gives us the response variable (\\(y\\)). The `irt_data_bb` function takes the original black box data, along with the scored variable produced by `score_bb_data`, and produces a list object in the form needed by Stan to fit the IRT models. If you wish to fit the models on a different set of data, you can do so if the dataset has been formatted as a list object with the same attributes as the `irt_data_bb` function output (see package documentation for additional details).
```
im_data <- irt_data_bb(TestResponses, im_scored)
```
We can now use `fit_irt` to fit the Rasch models.
```
im_model <- fit_rasch(im_data, iterations = 600, n_chains = 4)
```
In practice, it is necessary to ensure that the MCMC sampler has converged using a variety of diagnostics. We omit these steps here for brevity, but the `blackboxstudyR` package will include a vignette detailing this process, or see e.g. Gelman et al. ([2013](#ref-bda3)).
After the model has been fit, we can plot the posterior distributions of difficulties and proficiencies:
```
p1 <- plot_proficiency_posteriors(im_samples) + my_theme
p2 <- plot_difficulty_posteriors(im_samples) + my_theme
ggarrange(p1, p2, ncol = 2)
```
The lighter gray interval represents the 95% posterior interval and the black interval represents the 50% posterior interval. If we examine the posterior intervals for the difficulty estimates (\\(b\\)), we can see groups which have noticeably larger intervals, and thus more uncertainty regarding the estimate:
1\. those on the bottom left
2\. those on the upper right, and
3\. those in the middle.
These three groups of uncertain estimates correspond to:
1\. the questions that every participant answered correctly,
2\. the questions that every participant answered incorrectly, and
3\. the questions that every participant reported as an “inconclusive” or “no value”.
### 8\.5\.2 IRT complements an error rate analysis
The original analysis of the FBI “Black Box” Study (see Bradford T. Ulery et al. [2011](#ref-ulery2011)) did not include analysis of participant error rates, because each participant saw a different question set. Since proficiency accounts for the difficulty of question sets, however, we can directly compare participant proficiencies to each other, and also see how error rates and proficiency are related.
First, we compute the observed person scores.
```
obs_p_score <- bb_person_score(TestResponses, im_scored)
```
In order to use the `error_rate_analysis` function, we need to extract the median question difficulties from MCMC results.
```
q_diff <- apply(im_samples, 3, median)[grep("b\\[", names(apply(im_samples,
3, median)))]
ex_error_rates <- error_rate_analysis(TestResponses, q_diff)
```
Now, we can plot the proficiency estimates (with 95% posterior intervals) against the results from a traditional error rate analysis.
```
p1 <- person_mcmc_intervals(im_samples) %>% right_join(., obs_p_score, by = "exID") %>%
full_join(., ex_error_rates, by = "exID") %>% dplyr::select(., score, m,
ll, hh, exID, avg_diff, fpr, fnr) %>% ggplot(., aes(x = fpr, y = m, ymin = ll,
ymax = hh)) + geom_pointrange(size = 0.3) + labs(x = "False Positive Rate",
y = "Proficiency Estimate") + my_theme
p2 <- person_mcmc_intervals(im_samples) %>% right_join(., obs_p_score, by = "exID") %>%
full_join(., ex_error_rates, by = "exID") %>% dplyr::select(., score, m,
ll, hh, exID, avg_diff, fpr, fnr) %>% ggplot(., aes(x = fnr, y = m, ymin = ll,
ymax = hh, color = fpr > 0)) + geom_pointrange(size = 0.3) + labs(x = "False Negative Rate",
y = "Proficiency Estimate") + scale_colour_manual(values = c("black", "steelblue")) +
my_theme + theme(legend.position = "none")
ggarrange(p1, p2, ncol = 2)
```
Figure 8\.3: Proficiency vs False Positive Rate (left) and False Negative Rate (right)
Figure [8\.3](decision-making.html#fig:hf-error-rate-plots) shows proficiency against the false positive rate (left) and false negative rate (right). Those participants who made at least one false positive error are colored in blue on the right side plot. We see that one of the participants who made a false positive error still received a relatively large proficiency estimate due to having such a small false negative rate.
If, instead of looking error rates for each participant, we examine observed scores, the estimated proficiencies correlate with the observed score (Figure [8\.4](decision-making.html#fig:hf-prof-observed). That is, participants with a higher observed score are generally given larger proficiency estimates than participants with lower scores. There are, however, cases where participants scored roughly the same on the study but are given vastly different proficiency estimates. For example, the highlighted participants in the right plot above all scored between 94% and 96%, but their estimated proficiencies range from \\(\-1\.25\\) to \\(2\.5\\).
```
p1 <- person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh)) + geom_pointrange(size = 0.3) +
labs(x = "Observed Score", y = "Proficiency Estimate") + my_theme
p2 <- person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh)) + geom_pointrange(size = 0.3) +
gghighlight(score < 0.96 & score > 0.94) + labs(x = "Observed Score", y = "Proficiency Estimate") +
my_theme
ggarrange(p1, p2, ncol = 2)
```
Figure 8\.4: Proficiency vs Observed Score
If we examine those participants who scored between 94% and 96% more closely, we can see that the discrepancies in their proficiencies are largely explained by the difficulty of the specific question set they saw. This is evidenced by the positive trend in Figure [8\.5](decision-making.html#fig:hf-prof-by-diff). In addition to the observed score and difficulty of the question set, the number of questions the participant answers conclusively (i.e. individualization or exclusion) also plays a role in the proficiency estimate. Participants who are conclusive more often generally receive higher estimates of proficiency than participants who are conclusive less often.
```
person_mcmc_intervals(im_samples) %>% right_join(obs_p_score, by = "exID") %>%
full_join(ex_error_rates, by = "exID") %>% dplyr::select(score, m, ll, hh,
exID, avg_diff, pct_skipped) %>% filter(score < 0.96 & score > 0.94) %>%
ggplot(aes(x = avg_diff, y = m, ymin = ll, ymax = hh, col = 1 - pct_skipped)) +
geom_pointrange(size = 0.3) + labs(x = "Avg Q Difficulty", y = "Proficiency Estimate",
color = "% Conclusive") + my_theme
```
Figure 8\.5: Proficiency vs Average Question Difficulty, for participants with observed score between 94 and 96 percent correct.
### 8\.5\.3 Scoring method affects proficiency estimates
To illustrate the difference in results between different scoring methods, we’ll now score the data and fit models in two more ways: `no_consensus_incorrect` and `partial_credit`.
```
nci_scored <- score_bb_data(TestResponses, "no_consensus_incorrect")
nci_data <- irt_data_bb(TestResponses, nci_scored)
pc_scored <- score_bb_data(TestResponses, "partial_credit")
pc_data <- irt_data_bb(TestResponses, pc_scored)
```
We use `fit_rasch` to fit the Rasch model to the `no_consensus_incorrect` data, and since the `partial_credit` data has three outcomes (correct, inconclusive, or incorrect) instead of only two (correct/incorrect), we use `fit_pcm` to fit a partial credit model to the data.
```
nci_model <- fit_rasch(nci_data, iterations = 1000, n_chains = 4)
pc_model <- fit_pcm(pc_data, iterations = 1000, n_chains = 4)
```
We can examine the proficiency estimates and observed scores for each participant under each of the three scoring schemes, similar to Figure [8\.4](decision-making.html#fig:hf-prof-observed) above. Under the partial credit scoring scheme, a correct identification/exclusion is scored as a “2”, an inconclusive response is scored as a “1” and an incorrect identification/exclusion is scored as a “0”. The observed score is then computed by \\((\\\# Correct \+ \\\# Inconclusive) / (2 \\times \\\# Responses)\\) to scale the score to be between 0 and 1\.
```
p_score_im <- bb_person_score(TestResponses, im_scored)
p_score_im <- person_mcmc_intervals(blackboxstudyR::im_samples) %>% right_join(p_score_im,
by = "exID") %>% mutate(scoring = rep("im", nrow(p_score_im)))
p_score_nci <- bb_person_score(TestResponses, nci_scored)
p_score_nci <- person_mcmc_intervals(blackboxstudyR::nci_samples) %>% right_join(p_score_nci,
by = "exID") %>% mutate(scoring = rep("nci", nrow(p_score_nci)))
p_score_pc <- bb_person_score(TestResponses, pc_scored)
p_score_pc <- person_mcmc_intervals(blackboxstudyR::pc_samples) %>% right_join(p_score_pc,
by = "exID") %>% mutate(scoring = rep("pc", nrow(p_score_pc)))
p1 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% ggplot(aes(x = score,
y = m, ymin = ll, ymax = hh, col = scoring)) + geom_pointrange(size = 0.3,
alpha = 0.5) + labs(x = "Observed Score", y = "Estimated Proficiency") +
my_theme
p2 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% group_by(exID) %>%
ggplot(aes(x = score, y = m, ymin = ll, ymax = hh, col = scoring, group = exID)) +
geom_pointrange() + gghighlight(hh < -0.5, use_group_by = FALSE) + geom_line(col = "black",
linetype = "dotted") + labs(x = "Observed Score", y = "Estimated Proficiency") +
geom_hline(yintercept = -0.5, col = "darkred", linetype = "dashed") + my_theme
ggarrange(p1, p2, ncol = 2, common.legend = TRUE, legend = "bottom")
```
Figure 8\.6: Proficiency vs Observed Score for each of three scoring schemes
Treating the inconclusives as missing (“im”) leads to both the smallest range of observed scores and the largest range of estimated proficiencies. Harsher scoring methods (e.g. `no consensus incorrect` (“nci”)) do not necessarily lead to lower estimated proficiencies. For instance, the participants who scored around 45% under the “nci” scoring method (in green) are given higher proficiency estimates than the participant who scored 70% under the “im” scoring method. The scoring method thus affects the proficiency estimates in a somewhat non\-intuitive way, as larger ranges of observed scores do not necessarily correspond to larger ranges of proficiency estimates.
Also note that the uncertainty intervals under the “im” scoring scheme are noticeably larger than under the other scoring schemes. This is because the `inconclusive_mcar` scheme treats all of the inconclusives, nearly a third of the data, as missing, and this missingness contributes no information to the difficulty and proficiency estimates. Under the other scoring schemes (`no consensus incorrect` and `partial credit`) the inconclusive responses are never treated as missing, leading to a larger number of observations per participant and therefore a smaller amount of uncertainty in the proficiency estimate.
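One quick way to see this, assuming the `p_score_*` data frames constructed above, is to compare typical interval widths across the three schemes (a sketch, not a package diagnostic):

```
# Sketch: median width of the 95% posterior intervals, by scoring scheme.
p_score_im %>%
  bind_rows(p_score_nci) %>%
  bind_rows(p_score_pc) %>%
  group_by(scoring) %>%
  summarize(median_interval_width = median(hh - ll))
```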
The range of proficiencies under different scoring schemes and the uncertainty intervals for the proficiency estimates both have substantial implications if we consider setting a “mastery level” for participants. As an example, let’s consider setting the mastery threshold at \\(\-0\.5\\). We will then say examiners have not demonstrated mastery if the upper end of their proficiency uncertainty estimate is below \\(\-0\.5\\), illustrated in the right plot of Figure [8\.6](decision-making.html#fig:hf-prof-three-scores).
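The tally below is a sketch of how such a rule could be applied, again using the `p_score_*` objects built above; it simply counts examiners whose interval upper bound (`hh`) falls below the threshold under each scheme:

```
# Sketch: count examiners whose 95% interval upper bound is below the
# hypothetical mastery threshold of -0.5, by scoring scheme.
mastery_threshold <- -0.5
p_score_im %>%
  bind_rows(p_score_nci) %>%
  bind_rows(p_score_pc) %>%
  group_by(scoring) %>%
  summarize(n_not_mastered = sum(hh < mastery_threshold))
```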
The number of examiners that have not demonstrated mastery varies based on the scoring method used (11 for “nci”, 8 for “pc” and 11 for “im”) due to the variation in range of proficiency estimates. Additionally, for each of the scoring schemes, there are a number of examiners that did achieve mastery with the same observed score as those that did not demonstrate mastery. This is due to a main feature of item response models discussed earlier: participants that answered more difficult questions are given higher proficiency estimates than participants that answered the same number of easier questions.
We’ve also drawn dotted lines between proficiency estimates that correspond to the same person. Note that many of the participants who do not achieve mastery under one scoring scheme *do* achieve mastery under the other scoring schemes, since not all of the points are connected by dotted lines. There are also a few participants who do not achieve mastery under any of the scoring schemes. This raises the question of how much the proficiency estimates change for each participant under the different scoring schemes.
The plot on the left in Figure [8\.7](decision-making.html#fig:hf-prof-by-id) shows both a change in examiner proficiencies across scoring schemes (the lines connecting the proficiencies are not horizontal) as well as a change in the ordering of examiner proficiencies (the lines cross one another). That is, different scoring schemes affect examiner proficiencies in different ways.
The plot on the right illustrates participants that see substantial differences in their proficiency estimates under different scoring schemes. Examiners 105 and 3 benefit from the leniency in scoring when inconclusives are treated as missing (“im”). When inconclusives are scored as incorrect (“nci”) or partial credit (“pc”), they see a substantial decrease in their proficiency due to reporting a high number of inconclusives and differing from other examiners in their reasoning for reporting inconclusives. Examiners 142, 60 and 110, on the other hand, are hurt by the leniency in scoring when inconclusives are treated as missing (“im”). Their proficiency estimates increase when inconclusives are scored as correct when they match the consensus reason (“nci”) or are worth partial credit (“pc”).
```
p1 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% arrange(parameter) %>%
mutate(id = rep(1:169, each = 3)) %>% dplyr::select(id, m, scoring) %>%
spread(scoring, m) %>% mutate(max.diff = apply(cbind(abs(im - nci), abs(nci -
pc), abs(im - pc)), 1, max)) %>% gather("model", "median", -c(id, max.diff)) %>%
ggplot(aes(x = model, y = median, group = id, col = id)) + geom_point() +
geom_line() + labs(x = "Scoring Method", y = "Estimated Proficiency") +
my_theme
p2 <- p_score_im %>% bind_rows(p_score_nci) %>% bind_rows(p_score_pc) %>% arrange(parameter) %>%
mutate(id = rep(1:169, each = 3)) %>% dplyr::select(id, m, scoring) %>%
spread(scoring, m) %>% mutate(max.diff = apply(cbind(abs(im - nci), abs(nci -
pc), abs(im - pc)), 1, max)) %>% gather("model", "median", -c(id, max.diff)) %>%
ggplot(aes(x = model, y = median, group = id, col = id)) + geom_point() +
geom_line() + labs(x = "Scoring Method", y = "Estimated Proficiency") +
gghighlight(max.diff > 1.95, label_key = id, use_group_by = FALSE) + my_theme
ggarrange(p1, p2, ncol = 2, common.legend = TRUE, legend = "bottom")
```
Figure 8\.7: Change in proficiency for each examiner under the three scoring schemes. The right side plot has highlighted five examiners whose proficiency estimates change the most across schemes.
### 8\.5\.4 Discussion
We have provided an overview of human decision\-making in forensic analyses through the lens of latent print comparisons and the FBI “black box” study (Bradford T. Ulery et al. [2011](#ref-ulery2011)). A brief overview of Item Response Theory (IRT), a class of models used extensively in educational testing, was introduced in Section [8\.1\.1](decision-making.html#irt). A case study applying IRT to the FBI “black box” study is provided in Section [8\.5](decision-making.html#casestudy).
Results from an IRT analysis are largely consistent with conclusions from an error rate analysis. However, IRT provides substantially more information than a more traditional analysis, chiefly by accounting for the difficulty of the questions each participant saw. Additionally, IRT implicitly accounts for the inconclusive rate of different participants and provides estimates of uncertainty for both participant proficiency and item difficulty. If IRT were adopted on a large scale, participants could be compared directly even if they took different exams (for instance, proficiency exams in different years).
Three scoring schemes were presented in the case study, each of which leads to substantially different proficiency estimates across participants. Although IRT is a powerful tool for better understanding examiner performance on forensic identification tasks, we must be careful when choosing a scoring scheme. This is especially important for analyzing ambiguous responses, such as the inconclusive responses in the “black box” study.
| Field Specific |
urbanspatial.github.io | https://urbanspatial.github.io/PublicPolicyAnalytics/index.html |
Preface
=======
Welcome to the online version of *Public Policy Analytics: Code \& Context for Data Science in Government*, a book set to be [published by CRC Press](https://www.routledge.com/Public-Policy-Analytics-Code-and-Context-for-Data-Science-in-Government/Steif/p/book/9780367507619) as part of its Data Science Series. The data for this book can be found [here](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing).
The goal of this book is to make data science accessible to social scientists and City Planners, in particular. I hope to convince readers that someone with strong domain expertise plus intermediate data skills can have a greater impact in government than the sharpest computer scientist who has never studied economics, sociology, public health, political science, criminology, etc.
Public Policy Analytics was written to pass along the knowledge I have personally gained from so many gifted educators over the last 20 years. They are too many to name individually, but their impression on me has been so lasting and so monumental, that somewhere along the line, I decided to become an educator myself. This book is a reflection of all that these individuals have given to me.
I am incredibly grateful to my colleague Sydney Goldstein, without whom this book would not have been possible. Sydney was instrumental in helping me edit and compile the text. Additionally, she and I co\-authored an initial version of Chapter 7 as a white paper. Dr. Tony Smith, a most cherished mentor and friend, edited nearly every machine learning chapter in this book. Dr. Maria Cuellar (Ch. 5\), Michael Fichman (Intro), Matt Harris (review of functions), Dr. George Kikuchi (Ch. 5\); and Dr. Jordan Purdy (Chs. 6 \& 7\), each generously provided their time and expertise in review. I thank them wholeheartedly. All errors are mine alone. Finally, this book is dedicated to my wife, Diana, and my sons Emil and Malcolm, who always keep me focused on love and positivity.
I hope both non\-technical policymakers and budding public\-sector data scientists find this book useful and I thank you for taking a look.
Ken
Spring, 2021
West Philadelphia, PA.
Table of Contents
-----------------
| Chapter | Description | Data |
| --- | --- | --- |
| Chapter 1: Indicators for Transit Oriented Development | Following the Introduction, Chapter 1 introduces indicators as an important tool for simplifying and communicating complex processes to non\-technical decision makers. Introducing the `tidyverse`, `tidycensus`, and `sf` packages, this chapter analyzes whether Philadelphia renters are willing to pay a premium for transit amenities. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-1-indicators-for-transit-oriented-development) |
| Chapter 2: Expanding the Urban Growth Boundary | Chapter 2 explores the discontinuous nature of boundaries to understand how an Urban Growth Area in Lancaster County, PA affects suburban sprawl. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-2-expanding-the-urban-growth-boundary) |
| Chapters 3 \& 4: Intro to Geospatial Machine Learning | Chapters 3 and 4 provide a first look at geospatial predictive modeling, forecasting home prices in Boston, MA. Chapter 3 introduces linear regression, goodness of fit metrics, and cross\-validation, with the goal of assessing model accuracy and generalizability. Chapter 4 builds on the initial analysis to account for the ‘spatial process’ or pattern of home prices. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapters-3--4-intro-to-geospatial-machine-learning) |
| Chapter 5: Geospatial Risk Modeling \- Predictive Policing | Chapter 5 tackles the controversial topic of Predictive Policing, forecasting burglary risk in Chicago. The argument is made that converting Broken Windows theory into Broken Window policing can bake bias directly into a predictive model and lead to a discriminatory resource allocation tool. The concept of generalizability remains key. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-5-geospatial-risk-modeling---predictive-policing) |
| Chapter 6: People\-Based ML Models | Chapter 6 introduces the use of machine learning in estimating risk/opportunity for individuals. The resulting intelligence is then used to develop a cost/benefit analysis for Bounce to Work!, a pogo\-transit start\-up. The goal is to predict the probability a client will ‘churn’ or not re\-up their membership. This is valuable for public\-sector data scientists working with individuals and families. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-6-people-based-ml-models) |
| Chapter 7: People\-Based ML Models: Algorithmic Fairness | Chapter 7 evaluates people\-based algorithms for ‘disparate impact’ \- the idea that even if an algorithm is not designed to discriminate on its face, it may still have a discriminatory effect. This chapter returns to a criminal justice use case, estimating the *social* costs and benefits. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-7-people-based-ml-models-algorithmic-fairness) |
| Chapter 8: Predicting Rideshare Demand | Chapter 8 builds a space/time predictive model of ride share demand in Chicago. New R functionality is introduced along with functions unique to time series data. | [link](https://github.com/urbanSpatial/Public-Policy-Analytics-Landing#chapter-8-predicting-rideshare-demand) |
| Field Specific |
urbanspatial.github.io | https://urbanspatial.github.io/PublicPolicyAnalytics/TOD.html |
Chapter 1 Indicators for Transit Oriented Development
=====================================================
1\.1 Why Start With Indicators?
-------------------------------
According to the Federal Transit Administration, not one of America’s largest passenger subway systems saw fare revenues exceed operating expenses in 2015\.[6](#fn6)
This is an indicator \- a stylized fact that gives simple insight into a complicated phenomenon. Mastering indicators is critical for conveying nuanced context to non\-technical audiences. Here are four suggestions on what makes a good indicator:
1. A *relatable* indicator is typically motivated by a pressing policy concern. “How is it possible that passenger rail in New York City has such widespread delays, service suspensions, and rider discontent?” A great indicator solicits interest from an audience.
2. A *simple* indicator may be used as an exploratory tool in place of more complex statistics. Simplicity helps the audience understand the indicator’s significance and keeps them engaged in the analysis.
3. A *relative* indicator draws a contrast. “How can New York City passenger rail, with the most trips, still lose more money than each of the next ten largest cities?” Contextualizing an indicator with a relevant comparison makes for greater impact.
4. A good indicator typically generates more questions than answers. Thus, a good indicator fits into a broader *narrative* which helps motivate a more robust research agenda and ultimately, more applied analytics.
Simplicity is an indicator’s strength, but it may also be its weakness. Most statistics make assumptions. You should be aware of these assumptions, how they affect your conclusions, and ultimately how the audience interprets your results.
In this first chapter, space/time indicators are built from Census data to explore Transit Oriented Development (TOD) potential in Philadelphia. Along the way, we will learn how assumptions can lead to incorrect policy conclusions.
TOD advocates for increased housing and amenity density around transit (rail, subway, bus, etc.). There are many benefits to promoting this density, but two examples are particularly noteworthy.
First, transit needs scale to exist. Transit demand is a function of density and the more households, customers, and businesses around transit, the more efficient it is to operate a transit system. Efficiency means less maintenance, staffing, etc. Interestingly, Figure 1\.1 suggests that most transit systems are remarkably inefficient despite being in cities with density of just about everything.
Second, TOD is important for land value capitalization, which is essential to both developers and governments. If renters and home buyers are willing to pay more to locate near transit amenities, it should be reflected in higher land values and property tax returns near stations.
In this chapter, we play the role of Transportation Planner for the City of Philadelphia, and assess whether rents are higher in transit\-rich areas relative to places without transit access. If residents value these locations, officials might consider changing the zoning code to allow increased density around transit.
As the analysis progresses, spatial data wrangling and visualization fundamentals are presented with the `tidyverse`, `sf` and `ggplot2` packages. The `tidycensus` package is used to gather U.S. Census tract data. We begin by identifying some of the key assumptions made when working with geospatial Census data.
### 1\.1\.1 Mapping \& scale bias in areal aggregate data
Data visualization is a data scientist’s strongest communication tool because a picture tells a thousand words. However, visualizations, and maps in particular, can mislead. Figure x.x maps Median Household Income for Philadelphia Census tracts in 2000 and 2017\. The narrative suggests that incomes in Center City (around City Hall) have increased. While this is likely true, the precise narrative is, in part, driven by how colors are assigned to incomes on the map.
Figure 1\.3 below illustrates two different approaches for coloring maps. The topmost plots use `ggplot`’s default ‘equal interval’ breaks, while those on the bottom bin income into 5 quintile groups, which are intervals at the 1st, 20th, 40th, 60th and 80th percentiles of the data.
Setting map breaks can alter the narrative of a map. Ultimately, breaks should ‘hug the cliffs’ of the distribution. Compared to the equal interval breaks, the quintile breaks map portrays a sharper contrast in incomes across the city.
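To make the two approaches concrete, quintile breaks are simply percentiles of the variable; the following is a minimal sketch with a made\-up income vector:

```
# Minimal sketch (made-up incomes): equal interval vs. quintile breaks.
income <- c(18, 22, 25, 28, 31, 35, 42, 55, 80, 150) * 1000
seq(min(income), max(income), length.out = 6)          # equal interval breaks
quantile(income, probs = c(0, 0.2, 0.4, 0.6, 0.8, 1))  # quintile breaks
```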
Census tract maps also introduce bias related to scale. To start, when summary statistics like mean or median are used to summarize individuals, results may be biased by the ‘Ecological Fallacy’. For example, consider Figure 1\.4 below which visualizes three household income distributions in a tract.
One plot has a normal (i.e. bell curve) distribution with values close to the mean; one has much greater variance around the mean; and one is skewed. These tracts are all very different, but they all share the same ‘mean income’.
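The point is easy to reproduce with simulated data; the sketch below draws three very different income distributions that share (approximately) the same mean:

```
# Sketch (simulated data): three different income distributions, one mean.
set.seed(1)
normal_inc <- rnorm(1000, mean = 50000, sd = 5000)   # tight around the mean
spread_inc <- rnorm(1000, mean = 50000, sd = 25000)  # much greater variance
skewed_inc <- rlnorm(1000, meanlog = log(50000) - 0.5, sdlog = 1)  # right-skewed
round(c(mean(normal_inc), mean(spread_inc), mean(skewed_inc)))
```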
Figure 1\.5 shows these differences geographically, drawing two of the above household income distributions across the same census tract. The “Normal Distribution” seems relatively homogeneous while the “Skewed Distribution”, appears like a mixed income community.
The ecological fallacy is made worse if we assume a Census tract polygon is akin to a neighborhood \- an area of social, economic, and built environment homogeneity. The Census uses tracts to count people, not to reference social phenomena. When counting occurs inside *arbitrarily* drawn aggregate units like tracts, another source of bias emerges. This kind of scale bias is known as the ‘Modifiable Areal Unit Problem’ (MAUP).[7](#fn7)
For example, Figure 1\.6 locates houses in one corner of a tract, assuming that the remaining area is a park, perhaps. Again, we can see how representing this area with a single summary statistic may be problematic.
Scale bias is an important consideration when creating and interpreting indicators. We will continue to make scale assumptions throughout the remainder of this chapter, and in every other spatial chapter throughout the book. With experience, you will find it possible to create analytics that are useful despite scale bias.
1\.2 Setup
----------
The code blocks below include the libraries, options, and functions needed for the analysis. The `tidyverse` library enables data wrangling and visualization; `tidycensus` allows access to the Census API; `sf` wrangles and processes spatial data; and `kableExtra` helps create tables.
```
library(tidyverse)
library(tidycensus)
library(sf)
library(kableExtra)
options(scipen=999)
options(tigris_class = "sf")
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
```
Two global `options` are set: `scipen` tells R not to use scientific notation, and `tigris_class` tells `tidycensus` to download Census geometries in the `sf` or Simple Features format. Finally, the functions used throughout this book are read in from a source file. This chapter uses the `mapTheme` and `plotTheme` to standardize the formatting of maps and plots, as well as the `qBr` and `q5` functions to create quintile map breaks.
`palette5` is a color palette made up of five hex codes. The function `c()` ‘combines’ a set of values into a vector. There are a variety of websites like ColorBrewer that can help you explore color ramps for maps.[8](#fn8)
```
palette5 <- c("#f0f9e8","#bae4bc","#7bccc4","#43a2ca","#0868ac")
```
### 1\.2\.1 Downloading \& wrangling Census data
The `tidycensus` package provides an excellent interface for querying Census data in R. The below table shows a set of variables, variable codes, and the `Short_name` we will use for this analysis. For a list of the 2000 Census variables, use `View(load_variables(2000, "sf3", cache = TRUE))`.
| Variable | Census\_code\_in\_2000 | Census\_code\_in\_2017 | Short\_name |
| --- | --- | --- | --- |
| Total Population | P001001 | B25026\_001E | TotalPop |
| Number of white residents | P006002 | B02001\_002E | NumberWhites |
| Total: Female: 18 to 24 years: Bachelor’s degree | PCT025050 | B15001\_050E | TotalFemaleBachelors |
| Total: Male: 18 to 24 years: Bachelor’s degree | PCT025009 | B15001\_009E | TotalMaleBachelors |
| Median Household Income | P053001 | B19013\_001E | MedHHInc |
| Median contract rent | H056001 | B25058\_001E | MedRent |
| Total living in poverty | P092001 | B06012\_002E | TotalPoverty |
Table 1\.1
Before querying the Census API, you will need your own Census API key which can be downloaded for free.[9](#fn9) Once acquired, input the key with `census_api_key("myKey...")`.
The `get_decennial` function downloads tract data for the variables in the table above. `year` is set to 2000 and `state` and `county` are set to Pennsylvania and Philadelphia, respectively. Setting `geometry` to true (`T`) ensures tract geometries are downloaded with the data.
`st_transform` is used to project the data from decimal degrees to feet. ‘Projection’ is the mathematical process by which the Earth is ‘flattened’ onto a plane, like a map. It is preferable to work in coordinate systems projected in feet or meters where distance can be measured reliably.
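For reference, the call described above would look roughly like the sketch below; it is shown commented out because the year 2000 endpoint may be unavailable (see the next paragraph) and additional arguments, such as the summary file, may be required depending on the `tidycensus` version:

```
# Sketch only (not run): the get_decennial call described above, using the
# year 2000 variable codes from Table 1.1.
# tracts00 <-
#   get_decennial(geography = "tract",
#                 variables = c("P001001","P006002","PCT025050","PCT025009",
#                               "P053001","H056001","P092001"),
#                 year = 2000, state = 42, county = 101, geometry = T) %>%
#   st_transform('ESRI:102728')
```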
In this chapter, the `tidycensus` package is used to query and return Census data directly into R. At the time of publication, however, the Census has disabled the year 2000 endpoint/data.[10](#fn10) Instead, the code below downloads an equivalent data frame called `tracts00`. A data frame is the most common way to store data in R. The code block at the end of this section demonstrates a `tidycensus` call to return 2017 data.
`class(tracts00)` shows that `tracts00` is a unique type of data frame as it is also a Simple Features or `sf` object with polygon geometries for each census tract.
```
tracts00 <-
st_read(file.path(root.dir,"/Chapter1/PHL_CT00.geojson")) %>%
st_transform('ESRI:102728')
```
`tracts00[1:3,]` is an example of matrix notation and can be used to reference rows and columns of a data frame. This tells R to return the first three rows and all of the columns from the `tracts00` data frame. `tracts00[1:3,1]` returns the first three rows and the first column. In both instances, the `geometry` field is also returned. A specific set of columns can be returned by specifying a list, like so \- `tracts00[1:3,c(1:3,5)]`. The first three rows are returned along with the first through third columns and column five.
`GEOID` is a unique identifier the Census gives for each geography nationwide. ‘42’ is the state of Pennsylvania; ‘101’ is Philadelphia County; and the remaining digits identify a Census tract.
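Because `GEOID` is a string of nested FIPS codes, its components can be pulled apart with `substr()`; for example:

```
# Parsing a tract GEOID into its components (an example Philadelphia tract).
geoid <- "42101000100"
substr(geoid, 1, 2)   # state FIPS: "42" (Pennsylvania)
substr(geoid, 3, 5)   # county FIPS: "101" (Philadelphia)
substr(geoid, 6, 11)  # tract code: "000100"
```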
Note the header information that outputs as well, including the `geometry type`, in this case, polygons; the layer’s spatial extent, or bounding box, `bbox`; and the coordinate system, `CRS`. Google the CRS and find it on spatialreference.org to reveal its code, `102728`. This parameter is passed to `st_transform` to project the layer into feet.[11](#fn11)
The seven variables downloaded are structured in a strange way. You may be most familiar with data in ‘wide’ form, where each row represents a Census tract and each column, a variable. This data is formatted in ‘long’ form, which provides some interesting advantages.
`table(tracts00$variable)` returns a frequency count of rows associated with each `variable`. Note the `$` operator, which points to a specific column in a data frame. The output shows seven variables have been downloaded, each consisting of 381 rows, suggesting there are 381 census tracts in Philadelphia. In this long form, 381 tracts across seven variables are ‘stacked’ atop one another for a total 2,667 rows.
As an intro to mapping with `sf` layers, the code below creates a new data frame, `totalPop00`, including only Total Population in 2000, `P001001`. The `tracts00` data frame is ‘piped’ into the `filter` function using the `%>%` operator. A pipe is a shorthand method for chaining successive functions together. Many larger code blocks throughout the book are spliced together with pipes. Running each function successively will help new R users learn. `filter` performs a subset by evaluating a specific query, in this case, looking for any row where the `variable` field equals `P001001`. The result is set equal to (`<-`) the `sf` data frame, `totalPop00`.
```
totalPop00 <-
tracts00 %>%
filter(variable == "P001001")
```
`nrow(totalPop00)` tells us that the data frame has 381 rows. `names(totalPop00)` returns the data frame’s column names. Finally, `head(totalPop00)` returns just the first six rows and all the columns.
A `geometry` field is included in a `sf` layer and the `sf` package has some quick and useful plotting functions. `plot` is the base R plotting function, but when called with an `sf` layer, a map of each variable is returned. Matrix notation can be used to map a single variable, such as, `plot(totalPop00[,4])`.
```
plot(totalPop00)
```
`ggplot2` is a powerful tool for designing maps and visualization. A `ggplot` is constructed from a series of ‘geoms’ or geometric objects. Many different geoms[12](#fn12) exist, but `geom_sf` is used to map a Simple Features data frame.
The code block below and Figure 1\.8 illustrates incrementally, how nuance can be added to a `ggplot`. Here, plot `A` maps `totalPop00`, using the `aes` or aesthetic parameter to `fill` the tract polygons by the `value` field.
Plot `B` converts `value` to 5 quintile categories using the `q5` function. These 5 categories are of class `factor`. Try `q5(totalPop00$value)`.
Plot `C` adds fill color and legend improvements using the `scale_fill_manual` function. Many different scale types are possible.[13](#fn13) `values` is set to a list of colors, `palette5`. `labels` is set to the values of the associated quintiles (try `qBr(totalPop00, "value")`). Finally, a legend `name` is added. `\n` inserts a hard return.
Plot `D` inserts a title using the `labs` parameter, as well as a `mapTheme`.
```
A <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = value))
B <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value)))
C <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value))) +
scale_fill_manual(values = palette5,
labels = qBr(totalPop00, "value"),
name = "Total\nPopluation\n(Quintile Breaks)")
D <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value))) +
scale_fill_manual(values = palette5,
labels = qBr(totalPop00, "value"),
name = "Popluation\n(Quintile Breaks)") +
labs(title = "Total Population", subtitle = "Philadelphia; 2000") +
mapTheme()
```
To demonstrate how to calculate new variables, the raw Census data is converted to rates. `tracts00` is converted from long to the more common, wide form using the `spread` function. The `select` function drops one of the tract identifiers and `rename` is used to rename the variables.
Note the wide form output now shows 381 rows, one for each unique tract. This is likely a more familiar format for most readers.
```
tracts00 <-
dplyr::select(tracts00, -NAME) %>%
spread(variable, value) %>%
dplyr::select(-geometry) %>%
rename(TotalPop = P001001, Whites = P006002, MaleBachelors = PCT025009,
FemaleBachelors = PCT025050, MedHHInc = P053001, MedRent = H056001,
TotalPoverty = P092001)
st_drop_geometry(tracts00)[1:3,]
```
```
## GEOID MedRent TotalPop Whites MedHHInc TotalPoverty MaleBachelors
## 1 42101000100 858 2576 2095 48886 1739 64
## 2 42101000200 339 1355 176 8349 505 23
## 3 42101000300 660 2577 1893 40625 1189 41
## FemaleBachelors
## 1 48
## 2 73
## 3 103
```
Next, `mutate` is used to create the new rate variables; then extraneous columns are dropped using `select`. The `ifelse` prevents divide by 0 errors.
```
tracts00 <-
tracts00 %>%
mutate(pctWhite = ifelse(TotalPop > 0, Whites / TotalPop, 0),
pctBachelors = ifelse(TotalPop > 0, ((FemaleBachelors + MaleBachelors) / TotalPop), 0),
pctPoverty = ifelse(TotalPop > 0, TotalPoverty / TotalPop, 0),
year = "2000") %>%
dplyr::select(-Whites,-FemaleBachelors,-MaleBachelors,-TotalPoverty)
```
`tracts00` is now a complete dataset for the year 2000, but the TOD study requires data in the future as well. To download the 2017 American Community Survey or ACS data (`get_acs`), the below code block uses the pipe, `%>%`, to enable a more concise workflow.
The 2017 variable names are different from 2000, and the 2017 equivalent `value` field is called `estimate`. Setting `output="wide"` automatically downloads the data into wide form. `mutate` calculates rates. Finally, the `select` function is used with `starts_with` to remove all the original census codes (which happen to begin with `B`).
```
tracts17 <-
get_acs(geography = "tract", variables = c("B25026_001E","B02001_002E","B15001_050E",
"B15001_009E","B19013_001E","B25058_001E",
"B06012_002E"),
year=2017, state=42, county=101, geometry=T, output="wide") %>%
st_transform('ESRI:102728') %>%
rename(TotalPop = B25026_001E, Whites = B02001_002E,
FemaleBachelors = B15001_050E, MaleBachelors = B15001_009E,
MedHHInc = B19013_001E, MedRent = B25058_001E,
TotalPoverty = B06012_002E) %>%
dplyr::select(-NAME, -starts_with("B")) %>%
mutate(pctWhite = ifelse(TotalPop > 0, Whites / TotalPop,0),
pctBachelors = ifelse(TotalPop > 0, ((FemaleBachelors + MaleBachelors) / TotalPop),0),
pctPoverty = ifelse(TotalPop > 0, TotalPoverty / TotalPop, 0),
year = "2017") %>%
dplyr::select(-Whites, -FemaleBachelors, -MaleBachelors, -TotalPoverty)
```
The last step then is to combine the 2000 and 2017 tracts together, stacking both layers atop one another with `rbind`. `allTracts` is now a complete time/space dataset.
```
allTracts <- rbind(tracts00,tracts17)
```
### 1\.2\.2 Wrangling transit open data
The next task is to relate the Census tracts to subway stops, in space. Subway stations are downloaded directly into R from the SEPTA open data site.[14](#fn14) Philadelphia has two subway lines, the “El” or Elevated Subway which runs east to west down Market Street, and the Broad Street Line, which runs north to south on Broad Street.
The code below downloads and binds together El and Broad St. station locations into a single layer, `septaStops`. `st_read` downloads the data in geojson form (with geometries) from the web. A `Line` field is generated and selected along with the `Station` field. Lastly, the data is projected into the same coordinate system as `tracts00`.
```
septaStops <-
rbind(
st_read("https://opendata.arcgis.com/datasets/8c6e2575c8ad46eb887e6bb35825e1a6_0.geojson") %>%
mutate(Line = "El") %>%
select(Station, Line),
st_read("https://opendata.arcgis.com/datasets/2e9037fd5bef406488ffe5bb67d21312_0.geojson") %>%
mutate(Line ="Broad_St") %>%
select(Station, Line)) %>%
st_transform(st_crs(tracts00))
```
`septaStops` are mapped in Figure 1\.9 to illustrate a `ggplot` map overlay. The first `geom_sf` plots a Philadelphia basemap using `st_union` to ‘dissolve’ tract boundaries into a city boundary. The second `geom_sf` maps `septaStops`, assigning `colour` to the `Line` attribute. `show.legend` ensures the legend displays points. Note that a `data` parameter is specified for each `geom_sf`.
Above, `scale_fill_manual` was used to `fill` the `totalPop00` tract *polygons* with color. In this case `septaStops` *points* are `colour`ed using `scale_colour_manual`.
```
ggplot() +
geom_sf(data=st_union(tracts00)) +
geom_sf(data=septaStops, aes(colour = Line), show.legend = "point", size= 2) +
scale_colour_manual(values = c("orange","blue")) +
labs(title="Septa Stops", subtitle="Philadelphia, PA") +
mapTheme()
```
### 1\.2\.3 Relating tracts \& subway stops in space
To understand whether `TOD` tracts are valued more than `non-TOD` tracts, a methodology is needed to assign tracts to one of these two groups. Several overlay techniques are demonstrated below to find tracts *close* to subway stations. Defining ‘close’ provides another important lesson on spatial scale.
Human beings have a very instinctual understanding of closeness. You may be willing to ride a bike 3 miles to work everyday, but getting up to fetch the remote control from across the room is a noticeable burden. It’s fine for near and far to be subjective in the real world, but here, spatial relationships must be defined explicitly. Below, close is defined as tracts within a half mile (2,640 ft.) of stations.
`st_buffer` generates polygon ‘buffers’ with boundaries exactly a half mile from the stations. A long form layer stacks two sets of station buffers, dissolving (`st_union`) the second set into one large polygon. `2640` is understood as feet because `septaBuffers` is projected in feet. Note the two `Legend` items and that the `st_union` output is converted to an `sf` layer with `st_sf`.
The resulting ‘small multiple’ map (Figure 1\.10\) is only possible when data is organized in long form. `facet_wrap` is used to create the small multiple map, ensuring both are perfectly aligned.
```
septaBuffers <-
rbind(
st_buffer(septaStops, 2640) %>%
mutate(Legend = "Buffer") %>%
dplyr::select(Legend),
st_union(st_buffer(septaStops, 2640)) %>%
st_sf() %>%
mutate(Legend = "Unioned Buffer"))
ggplot() +
geom_sf(data=septaBuffers) +
geom_sf(data=septaStops, show.legend = "point") +
facet_wrap(~Legend) +
mapTheme()
```
Now to select tracts that fall inside of the buffer. The `Unioned Buffer` is used because it enables a cleaner overlay. Below, three different approaches for selecting tracts that are within 0\.5 miles of subway stations are considered.
1. Intersection or `Clip` \- Relate tracts and the buffer using the latter to ‘cookie cutter’ the former.
2. `Spatial Selection` \- Select all tracts that intersect or touch the buffer.
3. `Select by Centroids` \- Select all the tract *centroids* that intersect the buffer.
A centroid is the gravitational center of a polygon. The centroid for a circle is the point at which you can balance it on your finger. Interestingly, very irregular shapes may have centroids outside of a polygon’s boundary as Figure 1\.11 illustrates.
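The sketch below builds a simple polygon and computes its centroid with `sf`; for highly irregular (e.g. crescent shaped) polygons, the same call can return a point that falls outside the boundary:

```
# Sketch: a unit square polygon and its centroid (0.5, 0.5).
square <- st_polygon(list(rbind(c(0,0), c(1,0), c(1,1), c(0,1), c(0,0))))
st_centroid(square)
```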
In the code block below, the `buffer` object pulls out just the `Unioned Buffer`. The first approach, `clip` uses `st_intersection` to cookie cutter the tracts. `TotalPop` is selected and a `Legend` is created. The second approach, `selection` uses matrix notation to select all `tracts00` that touch the `buffer`.
The third approach, `selectCentroids` is more complicated. Run each line separately to see how it works. A spatial selection is used again, but this time, `st_centroid` returns tract centroid points instead of polygons. A polygon output is needed, so `st_drop_geometry` converts the `sf` to a data frame and `left_join` joins it back to the original `tracts00`, complete with the polygon geometries. `st_sf` then converts the data frame to `sf` with polygon geometries. Joining two tables together requires a unique identifier, in this case `GEOID`.
```
buffer <- filter(septaBuffers, Legend=="Unioned Buffer")
clip <-
st_intersection(buffer, tracts00) %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Clip")
selection <-
tracts00[buffer,] %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Spatial Selection")
selectCentroids <-
st_centroid(tracts00)[buffer,] %>%
st_drop_geometry() %>%
left_join(dplyr::select(tracts00, GEOID)) %>%
st_sf() %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Select by Centroids")
```
Can you create the below small multiple map? To do so, `rbind` the three selection types together and use `facet_wrap` to return three maps. Remember, `st_union(tracts00)` creates a basemap. What are the apparent differences across the selection criteria?
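Here is a minimal sketch of such a map, assuming the `clip`, `selection`, and `selectCentroids` objects created above. The book’s figure also applies quintile breaks to the fill, but the mechanics are the same:

```
# Bind the three selections and facet by Selection_Type over a city basemap
ggplot() +
  geom_sf(data = st_union(tracts00)) +
  geom_sf(data = rbind(clip, selection, selectCentroids),
          aes(fill = TotalPop)) +
  facet_wrap(~Selection_Type) +
  labs(title = "Total population within 1/2 mile of subway stations") +
  mapTheme()
```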
`Clip` is the worst choice, as changing the geometries exacerbates ecological fallacy and MAUP bias. `Spatial Selection` is better, but a tract is included even if only a sliver of its area touches the `buffer`. This approach selects perhaps too many tracts.
`Select by Centroids` is my choice because it captures a concise set of tracts without altering geometries. While well\-reasoned, it is still subjective, and each choice will ultimately lead to different results. Such is the impact of scale problems in spatial analysis.
1\.3 Developing TOD Indicators
------------------------------
### 1\.3\.1 TOD indicator maps
Let us now explore the hypothesis that if residents value TOD, then rents should be higher in areas close to transit relative to places at greater distances.
The code block below replicates the select by centroid approach to return `allTracts` within and beyond the 0\.5 mile `buffer` (the `TOD` and `Non-TOD` groups, respectively). The second spatial selection uses `st_disjoint` to select centroids that *do not* intersect the `buffer` and are thus beyond a half mile. `mutate` is used to adjust 2000 rents for inflation.
```
allTracts.group <-
rbind(
st_centroid(allTracts)[buffer,] %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
st_sf() %>%
mutate(TOD = "TOD"),
st_centroid(allTracts)[buffer, op = st_disjoint] %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
st_sf() %>%
mutate(TOD = "Non-TOD")) %>%
mutate(MedRent.inf = ifelse(year == "2000", MedRent * 1.42, MedRent))
```
The small multiple map in Figure 1\.13 below visualizes both the `year` and `TOD` groups. The rightmost figure then maps inflation\-adjusted rent for 2000 and 2017\. Can you re\-create this figure using three `geom_sf` layers? The first is a basemap; the second maps rents using `fill = q5(MedRent.inf)`, removing tract boundaries by setting `colour=NA`; and the third overlays `buffer`, setting `colour = "red"` and `fill = NA`.
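One possible sketch of that layered map, assuming the objects created above; `q5`, `palette5`, and `mapTheme` come from the chapter setup, and the book’s exact figure may differ:

```
# Basemap, then rents in quintile breaks (no tract outlines), then the buffer
ggplot() +
  geom_sf(data = st_union(tracts00)) +
  geom_sf(data = allTracts.group, aes(fill = q5(MedRent.inf)), colour = NA) +
  geom_sf(data = buffer, colour = "red", fill = NA) +
  facet_wrap(~year) +
  scale_fill_manual(values = palette5, name = "Rent\n(Quintile Breaks)") +
  labs(title = "Inflation-adjusted rent, 2000 and 2017") +
  mapTheme()
```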
The map suggests that although rents increased dramatically in Philadelphia’s central business district, many areas close to transit did not see significant rent increases.
### 1\.3\.2 TOD indicator tables
Tables are often the least compelling approach for presenting data, but they can be useful. The table below shows the mean of each variable by `year` and `TOD` group.
The `tidyverse` package makes summary statistics easy. `group_by` defines the grouping variables and `summarize` calculates the across\-group means. `na.rm = T` removes any missing or `NA` values from the calculation. Without their removal, the calculation would return `NA`. The result is TOD by year group means.
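As a quick base R illustration of why `na.rm` matters here:

```
mean(c(470, 820, NA))               # returns NA
mean(c(470, 820, NA), na.rm = TRUE) # returns 645
```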
A clean table is then generated using the `kable` function. Is this the best format for comparing across space and time?
```
allTracts.Summary <-
st_drop_geometry(allTracts.group) %>%
group_by(year, TOD) %>%
summarize(Rent = mean(MedRent, na.rm = T),
Population = mean(TotalPop, na.rm = T),
Percent_White = mean(pctWhite, na.rm = T),
Percent_Bach = mean(pctBachelors, na.rm = T),
Percent_Poverty = mean(pctPoverty, na.rm = T))
kable(allTracts.Summary) %>%
kable_styling() %>%
footnote(general_title = "\n",
general = "Table 1.2")
```
| year | TOD | Rent | Population | Percent\_White | Percent\_Bach | Percent\_Poverty |
| --- | --- | --- | --- | --- | --- | --- |
| 2000 | Non\-TOD | 470\.5458 | 3966\.789 | 0\.4695256 | 0\.0096146 | 0\.3735100 |
| 2000 | TOD | 469\.8247 | 4030\.742 | 0\.3848745 | 0\.0161826 | 0\.4031254 |
| 2017 | Non\-TOD | 821\.1642 | 4073\.547 | 0\.4396967 | 0\.0116228 | 0\.2373258 |
| 2017 | TOD | 913\.3750 | 3658\.500 | 0\.4803197 | 0\.0288166 | 0\.3080936 |
Table 1\.2
How about an approach with variables as rows and the groups as columns? The `year` and `TOD` fields are ‘spliced’ together into a `year.TOD` field with the `unite` function. `gather` converts the data to long form using `year.TOD` as the grouping variable. `mutate` is used to `round` the `Value` field, and `spread` transposes the rows and columns, allowing for comparisons across both group types.
Is this table better? What can you conclude about our hypothesis?
```
allTracts.Summary %>%
unite(year.TOD, year, TOD, sep = ": ", remove = T) %>%
gather(Variable, Value, -year.TOD) %>%
mutate(Value = round(Value, 2)) %>%
spread(year.TOD, Value) %>%
kable() %>%
kable_styling() %>%
footnote(general_title = "\n",
general = "Table 1.3")
```
| Variable | 2000: Non\-TOD | 2000: TOD | 2017: Non\-TOD | 2017: TOD |
| --- | --- | --- | --- | --- |
| Percent\_Bach | 0\.01 | 0\.02 | 0\.01 | 0\.03 |
| Percent\_Poverty | 0\.37 | 0\.40 | 0\.24 | 0\.31 |
| Percent\_White | 0\.47 | 0\.38 | 0\.44 | 0\.48 |
| Population | 3966\.79 | 4030\.74 | 4073\.55 | 3658\.50 |
| Rent | 470\.55 | 469\.82 | 821\.16 | 913\.38 |
Table 1\.3
### 1\.3\.3 TOD indicator plots
The best way to visualize group differences is with a grouped bar plot. Below one is created by moving the data into long form with `gather`. Explore how the minus sign works inside `gather`.
In the plotting code, `year` is defined on the x\-axis, with each bar color filled by `TOD`. `geom_bar` defines a bar plot with two critical parameters. `stat` tells `ggplot` that a y\-axis `Value` is provided and `position` ensures the bars are side\-by\-side. What happens when the `position` parameter is removed?
`facet_wrap` is used to create small multiple plots across `Variable`s, and `scales = "free"` allows the y\-axis to vary with the scale of each variable (percentages vs. dollars).
```
allTracts.Summary %>%
gather(Variable, Value, -year, -TOD) %>%
ggplot(aes(year, Value, fill = TOD)) +
geom_bar(stat = "identity", position = "dodge") +
facet_wrap(~Variable, scales = "free", ncol=5) +
scale_fill_manual(values = c("#bae4bc", "#0868ac")) +
labs(title = "Indicator differences across time and space") +
plotTheme() + theme(legend.position="bottom")
```
What do these indicators tell us about TOD in Philadelphia? Between 2000 and 2017, the City became slightly more educated and less impoverished while rents increased dramatically. In 2000, there was almost no difference in rents between TOD and Non\-TOD tracts, but in 2017, that difference increased to more than $100\. It appears that residents increasingly are willing to pay more for transit access.
This is not the end of the story, however. Thus far, our analysis has ignored a key question \- *what is the relevant spatial process*? The Introduction discusses how the spatial process or pattern relates to decision\-making. Residents may be willing to pay more for transit, but perhaps there are other reasons behind rent increases?
It turns out that some TOD areas also happen to be in Center City, Philadelphia’s central business district. Living in and around Center City affords access to many other amenities beyond transit. Could omitting this critical spatial process from our analysis bias our results?
Let’s find out by creating three housing submarkets, one for each of the two subway lines and a third, Center City area, where the two subway lines intersect.
1\.4 Capturing three submarkets of interest
-------------------------------------------
In this section, three new submarkets are created. The `centerCity` submarket is created where the unioned El and Broad Street Line buffers intersect (`st_intersection`).
The `el` and `broad.st` submarket areas are created a bit differently. `st_difference` is used to *erase* any portion of the unioned El and Broad Street Line buffers areas that intersect `centerCity`. The three buffers are then bound into one layer and the result is mapped.
```
centerCity <-
st_intersection(
st_buffer(filter(septaStops, Line == "El"), 2640) %>% st_union(),
st_buffer(filter(septaStops, Line == "Broad_St"), 2640) %>% st_union()) %>%
st_sf() %>%
mutate(Submarket = "Center City")
el <-
st_buffer(filter(septaStops, Line == "El"), 2640) %>% st_union() %>%
st_sf() %>%
st_difference(centerCity) %>%
mutate(Submarket = "El")
broad.st <-
st_buffer(filter(septaStops, Line == "Broad_St"), 2640) %>% st_union() %>%
st_sf() %>%
st_difference(centerCity) %>%
mutate(Submarket = "Broad Street")
threeMarkets <- rbind(el, broad.st, centerCity)
```
`allTracts` is then related to the `threeMarkets`. A spatial join (`st_join`) is used to ‘stamp’ each tract centroid with the submarket it falls into. Note a spatial selection will not work here because there are now 3 submarket groups instead of one unioned buffer.
The spatial join result is then joined back to the polygon geometries with a `left_join` to `allTracts`. Any tract that is not overlaid by one of the `threeMarkets` buffers receives `NA`. The `mutate` then converts `NA` to a `Non-TOD` submarket with `replace_na`. Finally, `st_sf` converts the output, now with polygon geometries, to an `sf` data frame, which can be mapped as Figure 1\.15 above.
```
allTracts.threeMarkets <-
st_join(st_centroid(allTracts), threeMarkets) %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
mutate(Submarket = replace_na(Submarket, "Non-TOD")) %>%
st_sf()
```
Finally, as before, rent is adjusted for inflation and a grouped bar plot is created.
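A minimal sketch of that final step, assuming `allTracts.threeMarkets` from above and reusing the 1\.42 inflation adjustment from earlier; the book’s plot styling may differ:

```
# Adjust 2000 rents for inflation, compute mean rent by year and submarket,
# and plot a grouped bar chart
allTracts.threeMarkets %>%
  st_drop_geometry() %>%
  mutate(MedRent.inf = ifelse(year == "2000", MedRent * 1.42, MedRent)) %>%
  group_by(year, Submarket) %>%
  summarize(Rent = mean(MedRent.inf, na.rm = T)) %>%
  ggplot(aes(year, Rent, fill = Submarket)) +
  geom_bar(stat = "identity", position = "dodge") +
  labs(title = "Mean rent by submarket and year") +
  plotTheme() + theme(legend.position = "bottom")
```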
It was previously estimated that Philadelphians were willing to pay an additional $100, on average, to rent in tracts with transit access. However, now that the Center City effect has been controlled for, it seems these differences are negligible. It is now clear that rents in Center City may have been driving these results.
1\.5 Conclusion: Are Philadelphians willing to pay for TOD?
-----------------------------------------------------------
Are Philadelphians willing to pay a premium to live in transit\-rich areas? I hate to tell you this, but not enough has been done to fully answer this question \- which is one of the two main takeaways of this chapter.
It is critical to understand how omitted variables affect the relevant spatial process and ultimately, the results of an analysis. We suggested that the Center City effect played a role. Another way to think about the Center City effect is in the context of decision\-making:
It could be that households are willing to pay more for transit amenities, or that they pay more for other amenities in neighborhoods that happen to also be transit\-rich. These selection dynamics will play a massive role in our analytics later in the book.
The second takeaway from this chapter is that although indicators enable the data scientist to simplify complex ideas, those ideas must be *interpreted* responsibly. This means acknowledging important assumptions in the data.
How useful are the indicators we’ve created? They are *relatable*. Philadelphia is gentrifying and there is a need to add new housing supply in areas with transit access. These indicators, built from Census data, calculated from means, and visualized as maps and bar plots, are *simple*. Comparisons made across time and submarket make them *relative*. The final reason these indicators are useful is that they have clearly generated more questions than answers. Should Philadelphia wish to learn more about how renters value transit, these results suggest a more thorough study is needed.
1\.6 Assignment \- Study TOD in your city
-----------------------------------------
Recreate this analysis in a city of your choosing and prepare a policy brief for local City Council representatives. Do households value transit\-rich neighborhoods compared to others? How certain can you be about your conclusions given some of the spatial biases we’ve discussed? You must choose a city with open transit station data and crime data.
Prepare an accessible (non\-technical) R markdown document with the following deliverables. Provide a **brief** motivation at the beginning, annotate each visualization appropriately, and then provide brief policy\-relevant conclusions. Please show all code blocks. Here are the specific deliverables:
1. Show your data wrangling work.
2. Four small\-multiple (2000 \& 2017\+) visualizations comparing four selected Census variables across time and space (TOD vs. non\-TOD).
3. One grouped bar plot making these same comparisons.
4. One table making these same comparisons.
5. Create two graduated symbol maps of population and rent within 0\.5 mile of each *transit station*. Google for more information, but a graduated symbol map represents quantities for each transit station proportionally (see the sketch after this list).
6. Create a `geom_line` plot that shows mean rent as a function of distance to subway stations (Figure 1\.17\). To do this you will need to use the `multipleRingBuffer` function found in the `functions.R` script.
7. Download and wrangle point\-level crime data (pick a crime type). What is the relationship between crime, transit access and rents?
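For deliverable 5, here is one hypothetical sketch; the `stationPop` name and the buffer\-and\-sum logic are illustrative assumptions, not a prescribed solution. It sums tract population over each station’s half mile buffer and sizes each station point proportionally:

```
# Join tract centroids to each station's half mile buffer, sum population per
# station, then map one proportionally-sized point per station
# (tracts near multiple stations are counted once per station in this sketch)
stationPop <-
  st_join(st_buffer(septaStops, 2640),
          st_centroid(dplyr::select(tracts00, TotalPop))) %>%
  group_by(Station) %>%
  summarize(TotalPop = sum(TotalPop, na.rm = T)) %>%
  st_centroid()

ggplot() +
  geom_sf(data = st_union(tracts00)) +
  geom_sf(data = stationPop, aes(size = TotalPop), colour = "#0868ac") +
  labs(title = "Population within 1/2 mile of each station") +
  mapTheme()
```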
Below is an example of how the `multipleRingBuffer` tool works. The first argument to `st_join` is the tract centroids. The second is the output of `multipleRingBuffer`, which draws buffers at half mile intervals out to a 9 mile distance.
```
allTracts.rings <-
st_join(st_centroid(dplyr::select(allTracts, GEOID, year)),
multipleRingBuffer(st_union(septaStops), 47520, 2640)) %>%
st_drop_geometry() %>%
left_join(dplyr::select(allTracts, GEOID, MedRent, year),
by=c("GEOID"="GEOID", "year"="year")) %>%
st_sf() %>%
mutate(distance = distance / 5280) #convert to miles
```
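Continuing from the ring buffer join above, a minimal sketch of the deliverable 6 line plot; the grouping and styling are assumptions, and Figure 1\.17 in the book may differ:

```
# Mean rent by year within each half mile distance band, plotted as lines
allTracts.rings %>%
  st_drop_geometry() %>%
  group_by(year, distance) %>%
  summarize(Mean_Rent = mean(MedRent, na.rm = T)) %>%
  ggplot(aes(distance, Mean_Rent, colour = year)) +
  geom_point() +
  geom_line() +
  labs(title = "Rent as a function of distance to subway stations",
       x = "Distance (miles)", y = "Mean rent") +
  plotTheme()
```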
ggplot() +
geom_sf(data=st_union(tracts00)) +
geom_sf(data=septaStops, aes(colour = Line), show.legend = "point", size= 2) +
scale_colour_manual(values = c("orange","blue")) +
labs(title="Septa Stops", subtitle="Philadelphia, PA") +
mapTheme()
```
### 1\.2\.3 Relating tracts \& subway stops in space
To understand whether `TOD` tracts are valued more than `non-TOD` tracts, a methodology is needed to assign tracts to one of these two groups. Several overlay techniques are demonstrated below to find tracts *close* to subway stations. Defining ‘close’ provides another important lesson on spatial scale.
Human beings have a very instinctual understanding of closeness. You may be willing to ride a bike 3 miles to work everyday, but getting up to fetch the remote control from across the room is a noticeable burden. It’s fine for near and far to be subjective in the real world, but here, spatial relationships must be defined explicitly. Below, close is defined as tracts within a half mile (2,640 ft.) of stations.
`st_buffer` generates polygon ‘buffers’ with boundaries exactly half mile from the stations. A long form layer stacks two sets of station buffers, dissolving (`st_union`) the second set into one large polygon. `2640` is understood as feet because `septaBuffers` is projected in feet. Note the two `Legend` items and that the `st_union` output is converted to an `sf` layer with `st_sf`.
The resulting ‘small multiple’ map (Figure 1\.10\) is only possible when data is organized in long form. `facet_wrap` is used to create the small multiple map, ensuring both are perfectly aligned.
```
septaBuffers <-
rbind(
st_buffer(septaStops, 2640) %>%
mutate(Legend = "Buffer") %>%
dplyr::select(Legend),
st_union(st_buffer(septaStops, 2640)) %>%
st_sf() %>%
mutate(Legend = "Unioned Buffer"))
ggplot() +
geom_sf(data=septaBuffers) +
geom_sf(data=septaStops, show.legend = "point") +
facet_wrap(~Legend) +
mapTheme()
```
Now to select tracts that fall inside of the buffer. The `Unioned Buffer` is used because it enables a cleaner overlay. Below, three different approaches for selecting tracts that are with 0\.5 miles of subway stations are considered.
1. Intersection or `Clip` \- Relate tracts and the buffer using the latter to ‘cookie cutter’ the former.
2. `Spatial Selection` \- Select all tracts that intersect or touch the buffer.
3. `Select by Centroids` \- Select all the tract *centroids* that intersect the buffer.
A centroid is the gravitational center of a polygon. The centroid for a circle is the point at which you can balance it on your finger. Interestingly, very irregular shapes may have centroids outside of a polygon’s boundary as Figure 1\.11 illustrates.
In the code block below, the `buffer` object pulls out just the `Unioned Buffer`. The first approach, `clip` uses `st_intersection` to cookie cutter the tracts. `TotalPop` is selected and a `Legend` is created. The second approach, `selection` uses matrix notation to select all `tracts00` that touch the `buffer`.
The third approach, `selectCentroids` is more complicated. Run each line separately to see how it works. A spatial selection is used again, but this time, `st_centroid` returns tract centroid points instead of polygons. A polygon output is needed, so `st_drop_geometry` converts the `sf` to a data frame and `left_join` joins it back to the original `tracts00`, complete with the polygon geometries. `st_sf` then converts the data frame to `sf` with polygon geometries. Joining two tables together requires a unique identifier, in this case `GEOID`.
```
buffer <- filter(septaBuffers, Legend=="Unioned Buffer")
clip <-
st_intersection(buffer, tracts00) %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Clip")
selection <-
tracts00[buffer,] %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Spatial Selection")
selectCentroids <-
st_centroid(tracts00)[buffer,] %>%
st_drop_geometry() %>%
left_join(dplyr::select(tracts00, GEOID)) %>%
st_sf() %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Select by Centroids")
```
Can you create the below small multiple map? To do so, `rbind` the three selection types together and use `facet_wrap` to return three maps. Remember, `st_union(tracts00)` creates a basemap. What are the apparent differences across the selection criteria?
`Clip` is the worst choice as changing the geometries exasperates ecological fallacy and MAUP bias. `Spatial Selection` is better, but a tract is included even if only sliver of its area touches the `buffer`. This approach selects perhaps too many tracts.
`Select by Centroids` is my choice because it captures a concise set of tracts without altering geometries. While well\-reasoned, it is still subjective, and each choice will ultimately lead to different results. Such is the impact of scale problems in spatial analysis.
### 1\.2\.1 Downloading \& wrangling Census data
The `tidycensus` package provides an excellent interface for querying Census data in R. The below table shows a set of variables, variable codes, and the `Short_name` we will use for this analysis. For a list of the 2000 Census variables, use `View(load_variables(2000, "sf3", cache = TRUE))`.
| Variable | Census\_code\_in\_2000 | Census\_code\_in\_2017 | Short\_name |
| --- | --- | --- | --- |
| Total Population | P001001 | B25026\_001E | TotalPop |
| Number of white residents | P006002 | B02001\_002E | NumberWhites |
| Total: Female: 18 to 24 years: Bachelor’s degree | PCT025050 | B15001\_050E | TotalFemaleBachelors |
| Total: Male: 18 to 24 years: Bachelor’s degree | PCT025009 | B15001\_009E | TotalMaleBacheors |
| Median Household Income | P053001 | B19013\_001E | MedHHInc |
| Median contract rent | H056001 | B25058\_001E | MedRent |
| Total living in poverty | P092001 | B06012\_002E | TotalPoverty |
| |
| --- |
| Table 1\.1 |
Before querying the Census API, you will need your own Census API key which can be downloaded for free.[9](#fn9) Once acquired, input the key with `census_api_key("myKey...")`.
The `get_decennial` function downloads tract data for the variables in the table above. `year` is set to 2000 and `state` and `county` are set to Pennsylvania and Philadelphia, respectively. Setting `geometry` to true (`T`) ensures tract geometries are downloaded with the data.
`st_transform` is used to project the data from decimal degrees to feet. ‘Projection’ is the mathematical process by which the Earth is ‘flattened’ onto a plane, like a map. It is preferable to work in coordinate systems projected in feet or meters where distance can be measured reliably.
In this chapter, the `tidycensus` is used to query and return Census data directly into R. At the time of publication however, the Census has disabled the year 2000 endpoint/data.[10](#fn10) Instead, the code below downloads an equivalent data frame called `tracts00`. A data frame is the most common way to store data in R. The code block at the end of this section demonstrates a `tidycensus` call to return 2017 data.
`class(tracts00)` shows that `tracts00` is a unique type of data frame as it is also a Simple Features or `sf` object with polygon geometries for each census tract.
```
tracts00 <-
st_read(file.path(root.dir,"/Chapter1/PHL_CT00.geojson")) %>%
st_transform('ESRI:102728')
```
`tracts00[1:3,]` is an example of matrix notation and can be used to reference rows and columns of a data frame. This tells R to return the first three rows and all of the columns from the `tracts00` data frame. `tracts00[1:3,1]` returns the first three rows and the first column. In both instances, the `geometry` field is also returned. A specific set of columns can be returned by specifying a list, like so \- `tracts00[1:3,c(1:3,5)]`. The first three rows are returned along with the first through third columns and column five.
`GEOID` is a unique identifier the Census gives for each geography nationwide. ‘42’ is the state of Pennsylvania; ‘101’ is Philadelphia County; and the remaining digits identify a Census tract.
Note the header information that outputs as well, including the `geometry type`, in this case, polygons; the layer’s spatial extent, or bounding box, `bbox`; and the coordinate system, `CRS`. Google the CRS and find it on spatialreference.org to reveal its code, `102728`. This parameter is passed to `st_transform` to project the layer into feet.[11](#fn11)
The seven variables downloaded are structured in a strange way. You may be most familiar with data in ‘wide’ form, where each row represents a Census tract and each column, a variable. This data is formatted in ‘long’ form, which provides some interesting advantages.
`table(tracts00$variable)` returns a frequency count of rows associated with each `variable`. Note the `$` operator, which points to a specific column in a data frame. The output shows seven variables have been downloaded, each consisting of 381 rows, suggesting there are 381 census tracts in Philadelphia. In this long form, 381 tracts across seven variables are ‘stacked’ atop one another for a total 2,667 rows.
As an intro to mapping with `sf` layers, the code below creates a new data frame, `totalPop00`, including only Total Population in 2000, `P001001`. The `tracts00` data frame is ‘piped’ into the `filter` function using the `%>%` operator. A pipe is a shorthand method for enchaining successive functions. Many larger code blocks throughout the book are spliced together with pipes. Running each function successively will help new R users learn. `filter` performs a subset by evaluating a specific query, in this case, looking for any row where the `variable` field equals `P001001`. The result is set equal to (`<-`) the `sf` data frame, `totalPop00`.
```
totalPop00 <-
tracts00 %>%
filter(variable == "P001001")
```
`nrow(totalPop00)` tells us that the data frame has 381 rows. `names(totalPop00)` returns the data frame’s column names. Finally, `head(totalPop00)` returns just the first six rows and all the columns.
A `geometry` field is included in a `sf` layer and the `sf` package has some quick and useful plotting functions. `plot` is the base R plotting function, but when called with an `sf` layer, a map of each variable is returned. Matrix notation can be used to map a single variable, such as, `plot(totalPop00[,4])`.
```
plot(totalPop00)
```
`ggplot2` is a powerful tool for designing maps and visualization. A `ggplot` is constructed from a series of ‘geoms’ or geometric objects. Many different geoms[12](#fn12) exist, but `geom_sf` is used to map a Simple Features data frame.
The code block below and Figure 1\.8 illustrates incrementally, how nuance can be added to a `ggplot`. Here, plot `A` maps `totalPop00`, using the `aes` or aesthetic parameter to `fill` the tract polygons by the `value` field.
Plot `B` converts `value` to 5 quintile categories using the `q5` function. These 5 categories are of class `factor`. Try `q5(totalPop00$value)`.
Plot `C` adds fill color and legend improvements using the `scale_fill_manual` function. Many different scale types are possible.[13](#fn13) `values` is set to a list of colors, `palette5`. `labels` is set to the values of the associated quintiles (try `qBr(totalPop00, "value")`). Finally, a legend `name` is added. `\n` inserts a hard return.
Plot `D` inserts a title using the `labs` parameter, as well as a `mapTheme`.
```
A <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = value))
B <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value)))
C <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value))) +
scale_fill_manual(values = palette5,
labels = qBr(totalPop00, "value"),
name = "Total\nPopluation\n(Quintile Breaks)")
D <-
ggplot() +
geom_sf(data = totalPop00, aes(fill = q5(value))) +
scale_fill_manual(values = palette5,
labels = qBr(totalPop00, "value"),
name = "Popluation\n(Quintile Breaks)") +
labs(title = "Total Population", subtitle = "Philadelphia; 2000") +
mapTheme()
```
To demonstrate how to calculate new variables, the raw Census data is converted to rates. `tracts00` is converted from long to the more common, wide form using the `spread` function. The `select` function drops one of the tract identifiers and `rename` is used to rename the variables.
Note the wide form output now shows 381 rows, one for each unique tract. This is likely a more familiar format for most readers.
```
tracts00 <-
dplyr::select(tracts00, -NAME) %>%
spread(variable, value) %>%
dplyr::select(-geometry) %>%
rename(TotalPop = P001001, Whites = P006002, MaleBachelors = PCT025009,
FemaleBachelors = PCT025050, MedHHInc = P053001, MedRent = H056001,
TotalPoverty = P092001)
st_drop_geometry(tracts00)[1:3,]
```
```
## GEOID MedRent TotalPop Whites MedHHInc TotalPoverty MaleBachelors
## 1 42101000100 858 2576 2095 48886 1739 64
## 2 42101000200 339 1355 176 8349 505 23
## 3 42101000300 660 2577 1893 40625 1189 41
## FemaleBachelors
## 1 48
## 2 73
## 3 103
```
Next, `mutate` is used to create the new rate variables; then extraneous columns are dropped using `select`. The `ifelse` prevents divide by 0 errors.
```
tracts00 <-
tracts00 %>%
mutate(pctWhite = ifelse(TotalPop > 0, Whites / TotalPop, 0),
pctBachelors = ifelse(TotalPop > 0, ((FemaleBachelors + MaleBachelors) / TotalPop), 0),
pctPoverty = ifelse(TotalPop > 0, TotalPoverty / TotalPop, 0),
year = "2000") %>%
dplyr::select(-Whites,-FemaleBachelors,-MaleBachelors,-TotalPoverty)
```
`tracts00` is now a complete dataset for the year 2000, but the TOD study requires data in the future as well. To download the 2017 American Community Survey or ACS data (`get_acs`), the below code block uses the pipe, `%>%`, to enable a more concise workflow.
The 2017 variable names are different from 2000, and the 2017 equivalent `value` field is called `estimate`. Setting `output="wide"`, automatically downloads the data into wide form. `mutate` calculates rates. Finally, the `select` function is used with `starts_with` to remove all the original census codes (which happen to begin with `B`).
```
tracts17 <-
get_acs(geography = "tract", variables = c("B25026_001E","B02001_002E","B15001_050E",
"B15001_009E","B19013_001E","B25058_001E",
"B06012_002E"),
year=2017, state=42, county=101, geometry=T, output="wide") %>%
st_transform('ESRI:102728') %>%
rename(TotalPop = B25026_001E, Whites = B02001_002E,
FemaleBachelors = B15001_050E, MaleBachelors = B15001_009E,
MedHHInc = B19013_001E, MedRent = B25058_001E,
TotalPoverty = B06012_002E) %>%
dplyr::select(-NAME, -starts_with("B")) %>%
mutate(pctWhite = ifelse(TotalPop > 0, Whites / TotalPop,0),
pctBachelors = ifelse(TotalPop > 0, ((FemaleBachelors + MaleBachelors) / TotalPop),0),
pctPoverty = ifelse(TotalPop > 0, TotalPoverty / TotalPop, 0),
year = "2017") %>%
dplyr::select(-Whites, -FemaleBachelors, -MaleBachelors, -TotalPoverty)
```
The last step then is to combine the 2000 and 2017 tracts together, stacking both layers atop one another with `rbind`. `allTracts` is now a complete time/space dataset.
```
allTracts <- rbind(tracts00,tracts17)
```
### 1\.2\.2 Wrangling transit open data
The next task is to relate the Census tracts to subway stops, in space. Subway stations are downloaded directly into R from the SEPTA open data site.[14](#fn14) Philadelphia has two subway lines, the “El” or Elevated Subway which runs east to west down Market Street, and the Broad Street Line, which runs north to south on Broad Street.
The code below downloads and binds together El and Broad St. station locations into a single layer, `septaStops`. `st_read` downloads the data in geojson form (with geometries) from the web. A `Line` field is generated and selected along with the `Station` field. Lastly, the data is projected into the same coordinate system as `tracts00`.
```
septaStops <-
rbind(
st_read("https://opendata.arcgis.com/datasets/8c6e2575c8ad46eb887e6bb35825e1a6_0.geojson") %>%
mutate(Line = "El") %>%
select(Station, Line),
st_read("https://opendata.arcgis.com/datasets/2e9037fd5bef406488ffe5bb67d21312_0.geojson") %>%
mutate(Line ="Broad_St") %>%
select(Station, Line)) %>%
st_transform(st_crs(tracts00))
```
`septaStops` are mapped in Figure 1\.9 to illustrate a `ggplot` map overlay. The first `geom_sf` plots a Philadelphia basemap using `st_union` to ‘dissolve’ tract boundaries into a city boundary. The second `geom_sf` maps `septaStops`, assigning `colour` to the `Line` attribute. `show.legend` ensures the legend displays points. Note that a `data` parameter is specified for each `geom_sf`.
Above, `scale_fill_manual` was used to `fill` the `totalPop00` tract *polygons* with color. In this case `septaStops` *points* are `colour`ed using `scale_colour_manual`.
```
ggplot() +
geom_sf(data=st_union(tracts00)) +
geom_sf(data=septaStops, aes(colour = Line), show.legend = "point", size= 2) +
scale_colour_manual(values = c("orange","blue")) +
labs(title="Septa Stops", subtitle="Philadelphia, PA") +
mapTheme()
```
### 1\.2\.3 Relating tracts \& subway stops in space
To understand whether `TOD` tracts are valued more than `non-TOD` tracts, a methodology is needed to assign tracts to one of these two groups. Several overlay techniques are demonstrated below to find tracts *close* to subway stations. Defining ‘close’ provides another important lesson on spatial scale.
Human beings have a very instinctual understanding of closeness. You may be willing to ride a bike 3 miles to work everyday, but getting up to fetch the remote control from across the room is a noticeable burden. It’s fine for near and far to be subjective in the real world, but here, spatial relationships must be defined explicitly. Below, close is defined as tracts within a half mile (2,640 ft.) of stations.
`st_buffer` generates polygon ‘buffers’ with boundaries exactly half mile from the stations. A long form layer stacks two sets of station buffers, dissolving (`st_union`) the second set into one large polygon. `2640` is understood as feet because `septaBuffers` is projected in feet. Note the two `Legend` items and that the `st_union` output is converted to an `sf` layer with `st_sf`.
The resulting ‘small multiple’ map (Figure 1\.10\) is only possible when data is organized in long form. `facet_wrap` is used to create the small multiple map, ensuring both are perfectly aligned.
```
septaBuffers <-
rbind(
st_buffer(septaStops, 2640) %>%
mutate(Legend = "Buffer") %>%
dplyr::select(Legend),
st_union(st_buffer(septaStops, 2640)) %>%
st_sf() %>%
mutate(Legend = "Unioned Buffer"))
ggplot() +
geom_sf(data=septaBuffers) +
geom_sf(data=septaStops, show.legend = "point") +
facet_wrap(~Legend) +
mapTheme()
```
Now to select tracts that fall inside of the buffer. The `Unioned Buffer` is used because it enables a cleaner overlay. Below, three different approaches for selecting tracts that are with 0\.5 miles of subway stations are considered.
1. Intersection or `Clip` \- Relate tracts and the buffer using the latter to ‘cookie cutter’ the former.
2. `Spatial Selection` \- Select all tracts that intersect or touch the buffer.
3. `Select by Centroids` \- Select all the tract *centroids* that intersect the buffer.
A centroid is the gravitational center of a polygon. The centroid for a circle is the point at which you can balance it on your finger. Interestingly, very irregular shapes may have centroids outside of a polygon’s boundary as Figure 1\.11 illustrates.
In the code block below, the `buffer` object pulls out just the `Unioned Buffer`. The first approach, `clip` uses `st_intersection` to cookie cutter the tracts. `TotalPop` is selected and a `Legend` is created. The second approach, `selection` uses matrix notation to select all `tracts00` that touch the `buffer`.
The third approach, `selectCentroids` is more complicated. Run each line separately to see how it works. A spatial selection is used again, but this time, `st_centroid` returns tract centroid points instead of polygons. A polygon output is needed, so `st_drop_geometry` converts the `sf` to a data frame and `left_join` joins it back to the original `tracts00`, complete with the polygon geometries. `st_sf` then converts the data frame to `sf` with polygon geometries. Joining two tables together requires a unique identifier, in this case `GEOID`.
```
buffer <- filter(septaBuffers, Legend=="Unioned Buffer")
clip <-
st_intersection(buffer, tracts00) %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Clip")
selection <-
tracts00[buffer,] %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Spatial Selection")
selectCentroids <-
st_centroid(tracts00)[buffer,] %>%
st_drop_geometry() %>%
left_join(dplyr::select(tracts00, GEOID)) %>%
st_sf() %>%
dplyr::select(TotalPop) %>%
mutate(Selection_Type = "Select by Centroids")
```
Can you create the below small multiple map? To do so, `rbind` the three selection types together and use `facet_wrap` to return three maps. Remember, `st_union(tracts00)` creates a basemap. What are the apparent differences across the selection criteria?
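One possible solution, shown as a hedged sketch that assumes the `clip`, `selection`, and `selectCentroids` objects above (the default continuous fill scale is a placeholder choice):

```
intersections <- rbind(clip, selection, selectCentroids)

ggplot() +
  geom_sf(data = st_union(tracts00)) +                  # basemap of all tracts
  geom_sf(data = intersections, aes(fill = TotalPop)) + # selected tracts, shaded by population
  geom_sf(data = septaStops, show.legend = "point") +   # station points for reference
  facet_wrap(~Selection_Type) +                         # one panel per selection approach
  mapTheme()
```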
`Clip` is the worst choice as changing the geometries exacerbates ecological fallacy and MAUP bias. `Spatial Selection` is better, but a tract is included even if only a sliver of its area touches the `buffer`. This approach selects perhaps too many tracts.
`Select by Centroids` is my choice because it captures a concise set of tracts without altering geometries. While well\-reasoned, it is still subjective, and each choice will ultimately lead to different results. Such is the impact of scale problems in spatial analysis.
1\.3 Developing TOD Indicators
------------------------------
### 1\.3\.1 TOD indicator maps
Let us now explore the hypothesis that if residents value TOD, then rents should be higher in areas close to transit relative to places at greater distances.
The code block below replicates the select by centroid approach to return `allTracts` within and beyond the 0\.5 mile `buffer` (the `TOD` and `Non-TOD` groups, respectively). The second spatial selection uses `st_disjoint` to select centroids that *do not* intersect the `buffer` and are thus beyond a half mile. `mutate` is used to adjust 2000 rents for inflation.
```
allTracts.group <-
rbind(
st_centroid(allTracts)[buffer,] %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
st_sf() %>%
mutate(TOD = "TOD"),
st_centroid(allTracts)[buffer, op = st_disjoint] %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
st_sf() %>%
mutate(TOD = "Non-TOD")) %>%
mutate(MedRent.inf = ifelse(year == "2000", MedRent * 1.42, MedRent))
```
The small multiple map in Figure 1\.13 below visualizes both the `year` and `TOD` groups. The rightmost figure then maps inflation\-adjusted rent for 2000 and 2017\. Can you re\-create this figure using three `geom_sf` layers? The first is a basemap; the second maps rents using `fill = q5(MedRent.inf)`, removing tract boundaries by setting `colour=NA`; and the third overlays `buffer`, setting `colour = "red"` and `fill = NA`.
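A hedged sketch of those three layers is below. It assumes the `q5` quintile helper from `functions.r`; the discrete viridis fill scale is just a stand\-in choice.

```
ggplot(allTracts.group) +
  geom_sf(data = st_union(tracts00)) +                  # basemap
  geom_sf(aes(fill = q5(MedRent.inf)), colour = NA) +   # rent quintiles, no tract borders
  geom_sf(data = buffer, fill = NA, colour = "red") +   # half mile buffer outline
  scale_fill_viridis_d(name = "Rent (quintiles)") +
  facet_wrap(~year) +
  labs(title = "Median rent (inflation-adjusted), 2000 & 2017") +
  mapTheme()
```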
The map suggests that although rents increased dramatically in Philadelphia’s central business district, many areas close to transit did not see significant rent increases.
### 1\.3\.2 TOD indicator tables
Tables are often the least compelling approach for presenting data, but they can be useful. The table below shows group means for each variable across `year` and `TOD`.
The `tidyverse` package makes summary statistics easy. `group_by` defines the grouping variables and `summarize` calculates the across\-group means. `na.rm = T` removes any missing or `NA` values from the calculation. Without their removal, the calculation would return `NA`. The result is TOD by year group means.
A clean table is then generated using the `kable` function. Is this the best format for comparing across space and time?
```
allTracts.Summary <-
st_drop_geometry(allTracts.group) %>%
group_by(year, TOD) %>%
summarize(Rent = mean(MedRent, na.rm = T),
Population = mean(TotalPop, na.rm = T),
Percent_White = mean(pctWhite, na.rm = T),
Percent_Bach = mean(pctBachelors, na.rm = T),
Percent_Poverty = mean(pctPoverty, na.rm = T))
kable(allTracts.Summary) %>%
kable_styling() %>%
footnote(general_title = "\n",
general = "Table 1.2")
```
| year | TOD | Rent | Population | Percent\_White | Percent\_Bach | Percent\_Poverty |
| --- | --- | --- | --- | --- | --- | --- |
| 2000 | Non\-TOD | 470\.5458 | 3966\.789 | 0\.4695256 | 0\.0096146 | 0\.3735100 |
| 2000 | TOD | 469\.8247 | 4030\.742 | 0\.3848745 | 0\.0161826 | 0\.4031254 |
| 2017 | Non\-TOD | 821\.1642 | 4073\.547 | 0\.4396967 | 0\.0116228 | 0\.2373258 |
| 2017 | TOD | 913\.3750 | 3658\.500 | 0\.4803197 | 0\.0288166 | 0\.3080936 |
Table 1\.2
How about an approach with variables as rows and the groups as columns? The `year` and `TOD` fields are ‘spliced’ together into a `year.TOD` field with the `unite` function. `gather` converts the data to long form using `year.TOD` as the grouping variable. `mutate` can `round` the `Value` field and `spread` transposes the rows and columns allowing for comparisons across both group types.
Is this table better? What can you conclude about our hypothesis?
```
allTracts.Summary %>%
unite(year.TOD, year, TOD, sep = ": ", remove = T) %>%
gather(Variable, Value, -year.TOD) %>%
mutate(Value = round(Value, 2)) %>%
spread(year.TOD, Value) %>%
kable() %>%
kable_styling() %>%
footnote(general_title = "\n",
general = "Table 1.3")
```
| Variable | 2000: Non\-TOD | 2000: TOD | 2017: Non\-TOD | 2017: TOD |
| --- | --- | --- | --- | --- |
| Percent\_Bach | 0\.01 | 0\.02 | 0\.01 | 0\.03 |
| Percent\_Poverty | 0\.37 | 0\.40 | 0\.24 | 0\.31 |
| Percent\_White | 0\.47 | 0\.38 | 0\.44 | 0\.48 |
| Population | 3966\.79 | 4030\.74 | 4073\.55 | 3658\.50 |
| Rent | 470\.55 | 469\.82 | 821\.16 | 913\.38 |
Table 1\.3
### 1\.3\.3 TOD indicator plots
The best way to visualize group differences is with a grouped bar plot. Below one is created by moving the data into long form with `gather`. Explore how the minus sign works inside `gather`.
In the plotting code, `year` is defined on the x\-axis, with each bar color filled by `TOD`. `geom_bar` defines a bar plot with two critical parameters. `stat` tells `ggplot` that a y\-axis `Value` is provided and `position` ensures the bars are side\-by\-side. What happens when the `position` parameter is removed?
`facet_wrap` is used to create small multiple plots across `Variable`s, and `scales = "free"` allows the y\-axis to vary with the scale of each variable (percentages vs. dollars).
```
allTracts.Summary %>%
gather(Variable, Value, -year, -TOD) %>%
ggplot(aes(year, Value, fill = TOD)) +
geom_bar(stat = "identity", position = "dodge") +
facet_wrap(~Variable, scales = "free", ncol=5) +
scale_fill_manual(values = c("#bae4bc", "#0868ac")) +
labs(title = "Indicator differences across time and space") +
plotTheme() + theme(legend.position="bottom")
```
What do these indicators tell us about TOD in Philadelphia? Between 2000 and 2017, the City became slightly more educated and less impoverished while rents increased dramatically. In 2000, there was almost no difference in rents between TOD and Non\-TOD tracts, but in 2017, that difference increased to more than $100\. It appears that residents increasingly are willing to pay more for transit access.
This is not the end of the story, however. Thus far, our analysis has ignored a key question \- *what is the relevant spatial process*? The Introduction discusses how the spatial process or pattern relates to decision\-making. Residents may be willing to pay more for transit, but perhaps there are other reasons behind rent increases?
It turns out that some TOD areas also happen to be in Center City, Philadelphia’s central business district. Living in and around Center City affords access to many other amenities beyond transit. Could omitting this critical spatial process from our analysis bias our results?
Let’s find out by creating three housing submarkets, one for each of the two subway lines and a third, Center City area, where the two subway lines intersect.
1\.4 Capturing three submarkets of interest
-------------------------------------------
In this section, three new submarkets are created. The `centerCity` submarket is created where the unioned El and Broad Street Line buffers intersect (`st_intersection`).
The `el` and `broad.st` submarket areas are created a bit differently. `st_difference` is used to *erase* any portion of the unioned El and Broad Street Line buffers areas that intersect `centerCity`. The three buffers are then bound into one layer and the result is mapped.
```
centerCity <-
st_intersection(
st_buffer(filter(septaStops, Line == "El"), 2640) %>% st_union(),
st_buffer(filter(septaStops, Line == "Broad_St"), 2640) %>% st_union()) %>%
st_sf() %>%
mutate(Submarket = "Center City")
el <-
st_buffer(filter(septaStops, Line == "El"), 2640) %>% st_union() %>%
st_sf() %>%
st_difference(centerCity) %>%
mutate(Submarket = "El")
broad.st <-
st_buffer(filter(septaStops, Line == "Broad_St"), 2640) %>% st_union() %>%
st_sf() %>%
st_difference(centerCity) %>%
mutate(Submarket = "Broad Street")
threeMarkets <- rbind(el, broad.st, centerCity)
```
`allTracts` is then related to the `threeMarkets`. A spatial join (`st_join`) is used to ‘stamp’ each tract centroid with the submarket it falls into. Note a spatial selection will not work here because there are now 3 submarket groups instead of one unioned buffer.
The spatial join result then takes the polygon geometries with a `left_join` to `allTracts`. Any tract that is not overlaid by one of `threeMarkets` receives `NA`. The `mutate` then converts `NA` to a `Non-TOD` submarket with `replace_na`. `st_sf` finally converts the output with polygon geometries to an `sf` data frame which can be mapped as Figure 1\.15 above.
```
allTracts.threeMarkets <-
st_join(st_centroid(allTracts), threeMarkets) %>%
st_drop_geometry() %>%
left_join(allTracts) %>%
mutate(Submarket = replace_na(Submarket, "Non-TOD")) %>%
st_sf()
```
Finally, as before, rent is adjusted for inflation and a grouped bar plot is created.
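A hedged sketch of that final step, assuming `allTracts.threeMarkets` from above and the same 1\.42 inflation factor used earlier:

```
allTracts.threeMarkets %>%
  st_drop_geometry() %>%
  mutate(MedRent.inf = ifelse(year == "2000", MedRent * 1.42, MedRent)) %>%
  group_by(year, Submarket) %>%
  summarize(Rent = mean(MedRent.inf, na.rm = T)) %>%   # mean inflation-adjusted rent
  ggplot(aes(year, Rent, fill = Submarket)) +
    geom_bar(stat = "identity", position = "dodge") +  # side-by-side bars
    labs(title = "Mean rent by submarket and year") +
    plotTheme() + theme(legend.position = "bottom")
```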
It was previously estimated that Philadelphians were willing to pay an additional $100, on average, to rent in tracts with transit access. However, now that the Center City effect has been controlled for, it seems these differences are negligible. It is now clear that rents in Center City may have been driving these results.
1\.5 Conclusion: Are Philadelphians willing to pay for TOD?
-----------------------------------------------------------
Are Philadelphians willing to pay a premium to live in transit\-rich areas? I hate to tell you this, but not enough has been done to fully answer this question \- which is one of the two main takeaways of this chapter.
It is critical to understand how omitted variables affect the relevant spatial process and ultimately, the results of an analysis. We suggested that the Center City effect played a role. Another way to think about the Center City effect is in the context of decision\-making:
It could be that households are willing to pay more for transit amenities, or that they pay more for other amenities in neighborhoods that happen to also be transit\-rich. These selection dynamics will play a massive role in our analytics later in the book.
The second takeaway from this chapter is that although indicators enable the data scientist to simplify complex ideas, those ideas must be *interpreted* responsibly. This means acknowledging important assumptions in the data.
How useful are the indicators we’ve created? They are *relatable*. Philadelphia is gentrifying and there is a need to add new housing supply in areas with transit access. These indicators, built from Census data, calculated from means, and visualized as maps and bar plots, are *simple*. Comparisons made across time and submarket make them *relative*. The final reason these indicators are useful is that they have clearly generated more questions than answers. Should Philadelphia wish to learn more about how renters value transit, these results suggest a more thorough study is needed.
1\.6 Assignment \- Study TOD in your city
-----------------------------------------
Recreate this analysis in a city of your choosing and prepare a policy brief for local City Council representatives. Do households value transit\-rich neighborhoods compared to others? How certain can you be about your conclusions given some of the spatial biases we’ve discussed? You must choose a city with open transit station data and crime data.
Prepare an accessible (non\-technical) R markdown document with the following deliverables. Provide a **brief** motivation at the beginning, annotate each visualization appropriately, and then provide brief policy\-relevant conclusions. Please show all code blocks. Here are the specific deliverables:
1. Show your data wrangling work.
2. Four small\-multiple (2000 \& 2017\+) visualizations comparing four selected Census variables across time and space (TOD vs. non\-TOD).
3. One grouped bar plot making these same comparisons.
4. One table making these same comparisons.
5. Create two graduated symbol maps of population and rent within 0\.5 mile of each *transit station*. Google for more information, but a graduated symbol map represents quantities for each transit station proportionally.
6. Create a `geom_line` plot that shows mean rent as a function of distance to subway stations (Figure 1\.17\). To do this you will need to use the `multipleRingBuffer` function found in the `functions.R` script.
7. Download and wrangle point\-level crime data (pick a crime type). What is the relationship between crime, transit access and rents?
Below is an example of how the `multipleRingBuffer` tool works. The first parameter of `st_join` is the tract centroids. The second parameter is the buffer tool, drawing buffers in half mile intervals, out to a 9 mile distance.
```
allTracts.rings <-
st_join(st_centroid(dplyr::select(allTracts, GEOID, year)),
multipleRingBuffer(st_union(septaStops), 47520, 2640)) %>%
st_drop_geometry() %>%
left_join(dplyr::select(allTracts, GEOID, MedRent, year),
by=c("GEOID"="GEOID", "year"="year")) %>%
st_sf() %>%
mutate(distance = distance / 5280) #convert to miles
```
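For deliverable 6, a hedged sketch of the summarize\-and\-plot step that could follow `allTracts.rings` (the exact labels and aesthetics are assumptions):

```
allTracts.rings %>%
  st_drop_geometry() %>%
  group_by(distance, year) %>%
  summarize(Mean_Rent = mean(MedRent, na.rm = T)) %>%   # mean rent by ring and year
  ggplot(aes(distance, Mean_Rent, colour = year)) +
    geom_point() +
    geom_line() +
    labs(title = "Rent as a function of distance to subway stations",
         x = "Distance to stations (miles)", y = "Mean rent") +
    plotTheme()
```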
Chapter 2 Expanding the Urban Growth Boundary
=============================================
2\.1 Introduction \- Lancaster development
------------------------------------------
Cheap housing, cheap gas, and a preference for ‘country living’ have lured American families to the suburbs for decades, but at what cost?
Development patterns that promote lengthy car commutes and oversized ‘McMansions’ are generally not sustainable. We know that sprawl generates air and water pollution and degrades the quality of ecosystem services \- but what about economic competitiveness?
Lancaster County, Pennsylvania, a lush, rolling agricultural county 60 miles west of Philadelphia, has been wrestling with sprawl for more than three decades, as new development swallows up the farms that help drive the local economy. The rate of farm workers in Lancaster is nearly three times the rate statewide.[15](#fn15) According to the 2017 Census of Agriculture, agricultural commodity sale totals were estimated at more than $1 billion in Lancaster, making it the 55th highest grossing county nationally (top 2%) and the highest in Pennsylvania.[16](#fn16)
Lancaster’s farmland is quite valuable, but increasingly, that land is being paved over in favor of suburban housing. Figure 2\.1 visualizes the rate of suburban expansion in Lancaster since 1990\. Between 1990 and 2018, Lancaster’s population grew 17% while developed land increased 140% \- a clear indicator of sprawl.
What does the relationship between Lancaster development and population growth teach us about sprawl? Figure 2\.2a plots Census tract population (y\-axis) by acres of developed land (x\-axis) by decade. Note the decreasing line slopes. In 1990, one acre of development is associated with \~2 people, but by 2018, despite an overall population increase, the figure halved to \~1 person per acre.
Figure 2\.2b visualizes the same relationship but with *housing units*. Here the trend is similar. In 1990, a one acre increase in development was associated with \~2/3 of a housing unit \- already at a very low density. Again, by 2018, despite population increases, that figure fell by half.
To stem the sprawl, savvy Lancaster Planners in the 1990s worked with politicians, conservationists, and other stakeholders to develop a set of Urban Growth Areas (UGA) (Figure 2\.3\) to encourage development *inside* the urban core while discouraging development *outside* on outlying farmland. To what extent has the UGA succeeded to date?
Figure 2\.3 plots development change inside and outside the UGA boundary between 1990 and 2018\. Areas inside the UGA are developing much faster \- which is the purpose of the UGA. However, the UGA represents just 8\.6% of the total area of the County, and as it fills, the development trend flattens. On an acreage basis, development outside of the UGA has skyrocketed from more than 60,000 acres in 1990 to nearly 145,000 acres in 2018\. This suggests the UGA is not sufficiently containing sprawl.
One reasonable policy solution might be to increase development restrictions outside the UGA, while strategically expanding the existing UGA to promote infill development. In this chapter, we will develop a simple economic model to identify the towns best suited for expanding the growth area.
In the previous chapter’s discussion of the Modifiable Areal Unit Problem (MAUP), we discussed how boundaries can be a liability for the spatial analyst. In this chapter, however, boundaries are an asset, leveraged for their discontinuous nature to understand their effect on development.
The next section introduces the economic model used to identify towns suitable for UGA expansion. In Section 2\.2, analytics are created to delineate areas inside and outside of the UGA. 2\.3 uses the result as an input to the economic model. 2\.4 concludes.
### 2\.1\.1 The bid\-rent model
The bid\-rent model of urban land markets is used to understand the spatial pattern of density, price, and land uses in market\-oriented cities.[17](#fn17) Illustrated in the top panel of Figure 2\.4, the model suggests that as distance from the city center increases, building density decreases. Building density is closely correlated to population density and it also reflects locational demand.
As demand for a place increases, firms and households are willing to pay a premium, even for marginally less space. Like all ‘models’, bid\-rent is a simple abstraction of reality, but this simplicity can provide important insight into the problem at hand.
The bottom panel of Figure 2\.4 shows the bid\-rent curve for Lancaster County with housing unit density on the y\-axis. Compared to the theoretical curve, density in Lancaster flattens dramatically at the 5 mile mark continuing flat out to the county boundary. As we’ll learn below, this sharp drop\-off pattern is heavily influenced by the UGA.
For contrast, Figure 2\.5 visualizes bid\-rent curves for 8 cities/metro areas throughout the U.S. In most dense, post\-industrial cities like New York, Philadelphia, and San Francisco, the bid rent curve decreases exponentially out from the urban core. In more sprawling metro areas like Houston and Indianapolis, the downward curve is more linear and flat.
How can bid\-rent help us understand where to expand the UGA? Let’s take a look at a map of the fictitious ‘Ken County’, its UGA, and the three towns within (Figure 2\.6\). Assume Emil City is the urban core and contains higher density development compared to Dianaville and New Baby Town.
In the map below, two polygons are drawn at 1/8 mile radial distance out from and in from the UGA, respectively. These buffers are drawn to delineate areas just outside and just inside the UGA, and can be used to calculate differences in housing unit density across the boundary. Comparing these densities on either side of the UGA in a bid\-rent context can help identify in which Ken County town to expand the UGA boundary.
Let’s visualize bid\-rent curves in Emil City and Dianaville in Figure 2\.7 below. Emil City is the urban core and while density is greatest within the UGA, areas outside are also relatively high density. Conversely, inside Dianaville’s UGA there is some density, but just outside, density falls dramatically.
Currently, density and land value just outside the UGA are artificially low because land use regulations make it difficult to develop. If that restriction were to suddenly be lifted, however, potential building density and land values would likely *rise* to meet those just inside the UGA. Assuming the optimal town for UGA expansion is the one with the maximum economic impact, Dianaville would be the choice.
Which is the optimal town in Lancaster County? Let’s find out.
### 2\.1\.2 Setup Lancaster data
Begin by loading the requisite libraries, turning off scientific notation, and reading in the `mapTheme` and `plotTheme`.
```
options(scipen=999)
library(tidyverse)
library(sf)
library(gridExtra)
library(grid)
library(kableExtra)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
```
The data for this analysis is read in, including:
1. `studyAreaTowns` \- A polygon layer of town polygons inside of the Lancaster County study area. Note that there is a `MUNI` or municipal town name for each town polygon.
2. `uga` \- A polygon layer of the spatial extent of Lancaster County’s Urban Growth Area.
3. `lancCounty` \- A polygon layer of the spatial extent of Lancaster County.
4. `buildings` \- A polygon layer of the footprint for all buildings in the study area.
5. `greenSpace` \- A polygon layer showing areas classified as non\-developed land cover as classified by the USGS.
```
lancCounty <- st_read(file.path(root.dir,"/Chapter2/LancasterCountyBoundary.geojson")) %>%
st_transform('ESRI:102728')
uga <- st_read(file.path(root.dir,"/Chapter2/Urban_Growth_Boundary.geojson")) %>%
st_transform('ESRI:102728')
studyAreaTowns <- st_read(file.path(root.dir,"/Chapter2/StudyAreaTowns.geojson")) %>%
st_transform('ESRI:102728')
buildings <- st_read(file.path(root.dir,"/Chapter2/LancasterCountyBuildings.geojson")) %>%
st_transform('ESRI:102728')
greenSpace <- st_read(file.path(root.dir,"/Chapter2/LancasterGreenSpace.geojson")) %>%
st_transform('ESRI:102728')
```
2\.2 Identifying areas inside \& outside of the Urban Growth Area
-----------------------------------------------------------------
In this section, a polygon layer is produced where each unit is assigned 1\) the town it is in and 2\) whether it is 1/8th mile inside the UGA or 1/8th mile outside. This layer is created from just two inputs: `studyAreaTowns`, which delineates the 14 towns in Lancaster County, and `uga`, the boundary of the Urban Growth Area. Figure 2\.12 visualizes those boundaries.
The `uga` layer is used to find areas 1/8th mile inside and outside of the UGA. To make this easier, the code block below dissolves away (`st_union`) the towns leaving just the UGA boundary. The `st_buffer` is a quick way to remove some of the extraneous slivers that otherwise appear in the middle of the UGA.
```
uga_union <-
st_union(uga) %>%
st_buffer(1) %>%
st_sf()
```
Next, steps are taken to find areas 1/8th mile inside and outside the UGA. Here is the process:
1. `st_buffer` is used to find areas 1/8th mile *outside* of the `uga_union`. This results in a polygon larger than the original `uga_union`.
2. `st_difference` is then used to erase the original extent of `uga_union`, leaving only the 1/8th mile `outsideBuffer`.
3. To get areas 1/8th mile *inside* of the `uga_union`, a negative parameter is input to `st_buffer` to shrink the diameter of `uga_union`.
4. `st_difference` is used again to erase part of the geometry. This time however, the placement of the `.` means that the negative buffer polygon cookie cutters the larger `uga_union` leaving just the area 1/8th mile `insideBuffer`.
Note the `Legend` mutated for each buffer denoting its position inside or outside, and the creation of `bothBuffers`.
```
outsideBuffer <-
st_buffer(uga_union, 660) %>%
st_difference(uga_union) %>%
mutate(Legend = "Outside")
insideBuffer <-
st_buffer(uga_union, dist = -660) %>%
st_difference(uga_union, .) %>%
mutate(Legend = "Inside")
bothBuffers <- rbind(insideBuffer, outsideBuffer)
```
Finally, `bothBuffers` are plotted below.
```
ggplot() +
geom_sf(data = bothBuffers, aes(fill = Legend)) +
scale_fill_manual(values = c("#F8766D", "#00BFC4")) +
labs(title = "1/8mi buffer inside & outside UGA") +
mapTheme()
```
### 2\.2\.1 Associate each inside/outside buffer with its respective town.
Additional wrangling associates a town with each buffer. `table(st_is_valid(studyAreaTowns))` will likely tell you that there is a broken geometry in this layer. In the code block below, `st_make_valid` corrects the issue.
The goal is for the intersection to yield an `Inside` feature and an `Outside` feature for each town. Two towns only show up on one side of the boundary and are thus removed. `arrange(buffersAndTowns, MUNI) %>% print(n=25)` shows the remaining features.
```
buffersAndTowns <-
st_intersection(st_make_valid(studyAreaTowns), bothBuffers) %>%
filter(MUNI != "MOUNTVILLE BOROUGH" & MUNI != "MILLERSVILLE BOROUGH")
```
Finally, the below map can be produced which shows for each town, areas inside and outside the UGA. I have purposely mislabeled this map and omitted a legend. Can you build a better data visualization than this?
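As a hedged sketch, one properly labeled version might look like the following, assuming the `buffersAndTowns` layer above; the grey basemap and the faceting by inside/outside are assumptions:

```
ggplot() +
  geom_sf(data = st_union(studyAreaTowns), fill = "grey90") +      # study area basemap
  geom_sf(data = buffersAndTowns, aes(fill = MUNI), colour = NA) + # buffers, colored by town
  facet_wrap(~Legend) +                                            # one panel inside, one outside
  labs(title = "1/8 mi buffers by town, inside & outside the UGA",
       fill = "Municipality") +
  mapTheme()
```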
### 2\.2\.2 Building density by town \& by inside/outside the UGA
The bid\-rent model suggests that the town best suited for UGA expansion has the greatest infill potential, defined as the difference in building density on either side of the UGB. In this section, building density is calculated inside/outside the UGA by town, and the difference is taken.
To do so, each building must know its town and its place `Inside` or `Outside` the UGA. These relationships are calculated below.
The first code block below converts the `buildings` polygon layer to point centroids (`st_centroid`). A `counter` field is set to `1`, and will be used to sum the number of `buildingsCentroids` that fall into each town by inside/outside polygon.
The second code block uses `aggregate` to spatially join `buildingCentroids` to `buffersAndTowns`, taking the `sum` of `counter`, to get the count of buildings. `cbind` marries the spatial join to the original `buffersAndTowns` layer; `mutate` replaces `NA` with `0` for polygons with no buildings, and `Area` is calculated (`st_area`).
```
buildingCentroids <-
st_centroid(buildings) %>%
mutate(counter = 1) %>%
dplyr::select(counter)
buffersAndTowns_Buildings <-
aggregate(buildingCentroids, buffersAndTowns, sum) %>%
cbind(buffersAndTowns) %>%
mutate(counter = replace_na(counter, 0),
Area = as.numeric(st_area(.)))
```
The resulting data is grouped and summarized to get the sum of buildings and area for each town by inside/outside polygon. `Building_Density` is then calculated. Check out the `buffersAndTowns_Buildings_Summarize` layer to see the result.
```
buffersAndTowns_Buildings_Summarize <-
buffersAndTowns_Buildings %>%
group_by(MUNI, Legend) %>%
summarize(Building_Count = sum(counter),
Area = sum(Area)) %>%
mutate(Building_Density = Building_Count / Area)
```
The code block below calculates the inside/outside density difference by town. Note this requires the data be moved to wide form with `spread`. The resulting `Building_Difference` is sorted in descending order and the results can be used with other metrics to plan UGA expansion. 2\.3 below provides additional context for these differences.
Which towns seem like good candidates for UGA expansion?
```
buildingDifferenceTable <-
st_drop_geometry(buffersAndTowns_Buildings_Summarize) %>%
dplyr::select(MUNI, Legend, Building_Density) %>%
spread(Legend, Building_Density) %>%
mutate(Building_Difference = Inside - Outside) %>%
arrange(desc(Building_Difference))
```
| MUNI | Inside | Outside | Building\_Difference |
| --- | --- | --- | --- |
| EAST PETERSBURG BOROUGH | 0\.0000304 | 0\.0000000 | 0\.0000304 |
| WEST LAMPETER TOWNSHIP | 0\.0000190 | 0\.0000021 | 0\.0000169 |
| WEST HEMPFIELD TOWNSHIP | 0\.0000198 | 0\.0000033 | 0\.0000165 |
| UPPER LEACOCK TOWNSHIP | 0\.0000177 | 0\.0000033 | 0\.0000144 |
| EAST LAMPETER TOWNSHIP | 0\.0000170 | 0\.0000050 | 0\.0000120 |
| COLUMBIA BOROUGH | 0\.0000100 | 0\.0000000 | 0\.0000100 |
| MANHEIM TOWNSHIP | 0\.0000094 | 0\.0000020 | 0\.0000074 |
| PEQUEA TOWNSHIP | 0\.0000113 | 0\.0000048 | 0\.0000065 |
| LANCASTER TOWNSHIP | 0\.0000065 | 0\.0000000 | 0\.0000065 |
| MANOR TOWNSHIP | 0\.0000094 | 0\.0000038 | 0\.0000057 |
| EAST HEMPFIELD TOWNSHIP | 0\.0000061 | 0\.0000037 | 0\.0000024 |
| CITY OF LANCASTER | 0\.0000014 | 0\.0000000 | 0\.0000014 |
Table 2\.1
### 2\.2\.3 Visualize buildings inside \& outside the UGA
For a more precise look at a single town, the map below shows buildings inside and outside West Hempfield Township’s UGA, and a cutout map situating the town within the larger county.
The first step to create this map is to classify West Hempfield buildings by whether they are inside or outside the UGA. Start by intersecting the UGA and the West Hempfield Township to find areas in the UGA and the town (`uga_WH`).
`buildings_WH` is a layer of buildings labeled by their location `Inside UGA` and `Outside UGA`. This layer is created by first using a spatial selection to find all the `buildings` *within* the `uga_WH` polygon. To find buildings *without* (i.e. outside) the UGA, a spatial selection finds all `buildings` in West Hempfield, which is then passed to another spatial selection that keeps (`st_disjoint`) only the buildings **not** in West Hempfield’s UGA. This leaves buildings in West Hempfield but not in its UGA.
```
westHempfield <-
filter(studyAreaTowns, MUNI == "WEST HEMPFIELD TOWNSHIP")
uga_WH <-
st_intersection(uga, westHempfield) %>%
st_union() %>%
st_sf()
buildings_WH <-
rbind(
buildings[uga_WH,] %>% #Within
mutate(Legend = "Inside UGA"),
buildings[westHempfield,] %>% #Without
.[uga_WH, , op = st_disjoint] %>%
mutate(Legend = "Outside UGA"))
```
Can you re\-create the below map? There are two maps that need to be created \- one for West Hempfield and one for the cutout.
The West Hempfield map contains three `geom_sf` calls that 1\) plot the boundary of `westHempfield`; 2\) plot the `st_intersection` of `greenSpace` and `westHempfield` to find all the green space in town; and 3\) plot `buildings_WH` colored by the Inside/Outside `Legend`.
The cutout includes two `geom_sf` calls that 1\) plot all `studyAreaTowns` in grey and 2\) plot `WestHempfield` in black.
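A hedged sketch of how those two `ggplot` objects might be assembled is below. The object names match the `print` calls that follow, but the specific colors and styling are assumptions:

```
WestHempfield_BuildingsPlot <-
  ggplot() +
  geom_sf(data = westHempfield, fill = "grey90") +                 # town boundary
  geom_sf(data = st_intersection(greenSpace, westHempfield),
          fill = "darkgreen", colour = NA) +                       # green space in town
  geom_sf(data = buildings_WH, aes(fill = Legend), colour = NA) +  # buildings, inside/outside UGA
  labs(title = "Buildings inside & outside the UGA, West Hempfield Township") +
  mapTheme()

studyAreaCutoutMap <-
  ggplot() +
  geom_sf(data = studyAreaTowns, fill = "grey70", colour = "white") + # all towns in grey
  geom_sf(data = westHempfield, fill = "black") +                     # West Hempfield in black
  mapTheme()
```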
The two `ggplot` objects are then printed in succession below with `print`. The second line adds the cutout map and specifies where on the plot it should be located.
```
print(WestHempfield_BuildingsPlot)
print(studyAreaCutoutMap, vp=viewport(.32, .715, .35, .5))
```
2\.3 Return to Lancaster’s Bid Rent
-----------------------------------
Section 2\.1\.1 suggested that development restrictions just outside of the UGA keep building density artificially low, and that if those restrictions were lifted, a short\-term increase in building density (and land values) might be expected just outside.
In this section, we visualize density as a function of distance to the UGA, much like a bid\-rent curve. These plots should provide visual evidence of a sharp discontinuity in density at the UGA boundary.
Figure 2\.7 calculated housing unit density from aggregated Census tract data, but here, building density is calculated with the actual `buildingCentroids`, as it was in 2\.2\.2\. Instead of calculating densities on either immediate side of the UGA, it is calculated at multiple distance intervals using the `multipleRingBuffer` tool.
The `multipleRingBuffer` function iteratively draws buffers at successive intervals inside and outside the UGA. The code block below uses the function twice \- drawing negative buffers first and positive buffers second, both at 1/8 mile intervals. Note that the negative buffer can only extend so far into the interior of the UGA \- in this case, 2\.75 miles (14,520 ft).
The map plots the multiple ring buffer without buffer boundaries. `scale_fill_gradient2` is used to create the dual color gradient that diverges at `distance == 0`.
```
multipleRing <-
rbind(
multipleRingBuffer(uga_union, -14520, -660) %>%
mutate(Legend = "Inside the UGA"),
multipleRingBuffer(uga_union, 26400, 660) %>%
mutate(Legend = "Outside the UGA"))
```
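A hedged sketch of that map is below. It assumes the `multipleRingBuffer` output carries a `distance` field (as it does when joined in Chapter 1); the specific diverging colors are assumptions:

```
ggplot() +
  geom_sf(data = multipleRing, aes(fill = distance), colour = NA) +  # rings, no boundaries
  scale_fill_gradient2(midpoint = 0,                                 # diverge at the UGA
                       low = "#0571b0", mid = "white", high = "#ca0020",
                       name = "Distance to the UGA (ft)") +
  labs(title = "1/8 mi ring buffers inside & outside the UGA") +
  mapTheme()
```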
Figures 2\.13 and 2\.14 below visualize the bid\-rent curve for the study area and by town, respectively. To calculate these plots, the code in 2\.2\.2 is extended. As before, each building must know its location `Inside` or `Outside` the UGA; the `studyAreaTown` in which it is located; and now, its distance to/from the `uga`.
First, `buildingCentroids` is intersected with the `multipleRing` buffer. As before, `aggregate` spatial joins `buildingCentroids` to the `RingsAndTowns` polygons, calculating the sum of buildings for each. `Area` is calculated and ring/town polygons with no buildings have `NA` changed to 0\.
```
RingsAndTowns <-
st_intersection(multipleRing, st_make_valid(studyAreaTowns))
buildings.in.RingsAndTowns <-
aggregate(buildingCentroids,
RingsAndTowns, sum) %>%
cbind(RingsAndTowns) %>%
dplyr::select(-geometry.1) %>%
mutate(counter = replace_na(counter, 0),
Area = as.numeric(st_area(.)))
```
Figure 2\.13 plots building density as a function of distance to and from the UGA. To create this plot, the sf layer is converted to a data frame with `st_drop_geometry`. The data is grouped by the `distance` interval and inside/outside and `Building_Density` is calculated.
`geom_vline` is used to create the vertical line at the UGA and the line break is created by setting `colour = Legend` in the `aes` parameter of `ggplot`. `geom_smooth(method="loess", se=F)` fits a ‘local regression’ (`loess`) line to the scatterplot relationship between `distance` and `Building_Density`.
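A hedged sketch of that plot, assuming the `buildings.in.RingsAndTowns` object above and placing the vertical line at `distance == 0`:

```
buildings.in.RingsAndTowns %>%
  st_drop_geometry() %>%
  group_by(distance, Legend) %>%
  summarize(Building_Count = sum(counter),          # buildings per ring, inside/outside
            Area = sum(Area)) %>%
  mutate(Building_Density = Building_Count / Area) %>%
  ggplot(aes(distance, Building_Density, colour = Legend)) +
    geom_point() +
    geom_smooth(method = "loess", se = F) +         # local regression fit
    geom_vline(xintercept = 0, linetype = "dashed") + # the UGA boundary
    labs(title = "Building density by distance to the UGA",
         x = "Distance to the UGA (ft)", y = "Building density") +
    plotTheme()
```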
This ‘boundary discontinuity’ plot highlights how the bid\-rent conditions change on either immediate side of the UGA. How does it look for each town?
Figure 2\.14 is created below in the same way that Figure 2\.13 is created but with the addition of `MUNI` in the `group_by`, as well as `facet_wrap`. Some towns are completely within the UGA, and thus have no `Building_Density` outside. These plots again help reveal towns likely suitable for UGA expansion.
2\.4 Conclusion \- On boundaries
--------------------------------
Boundaries are a ubiquitous feature of the American landscape. States, counties, school districts, Congressional districts, and municipalities are all delineated by exacting boundaries that divide the landscape into neighborhoods, enclaves, political regimes, and economies. Boundaries dictate the haves and have\-nots; who pays more taxes and who pays less; who can access resources, and who cannot. Boundaries dictate the allocation of resources across space.
At times, boundaries are drawn to prevent phenomena from spilling out across space. In this chapter, the concern was with suburban sprawl spilling out onto outlying farmland and hurting Lancaster County’s agrarian economy. To make room for more development, the bid\-rent model was used to identify towns suitable for UGA expansion. Our assumption was that once expanded, building density, land values, and real estate prices just outside the UGA would adjust to those just inside.
Thus far, the reader has learned a series of analytical and visualization strategies for working with geospatial data. These methods will be invaluable as we move into more complex data science in the chapters to follow.
2\.5 Assignment \- Boundaries in your community
-----------------------------------------------
At the periphery of an Urban Growth Area is a hard boundary, a wall, with a different legal regime on either side. Interestingly, cities and regions are divided by many soft boundaries as well, with appreciably different conditions on either side, but not because of any legal mandate. Many historical inequities partitioned America into economic and racial enclaves, the results of which are still evident today.
What soft boundaries exist in your city, and how do they separate the communities on either side? In this assignment, you will wrangle together a significant street, avenue or other soft boundary of your choosing and create discontinuity plots similar to Figure 2\.13 above.
Take Philadelphia’s Girard Avenue (Figure 2\.15\), a clear dividing line between gentrified neighborhoods to the South (adjacent to Center City), and those to the North, which have been slower to change.
The code block below calculates distance from each tract to the Girard Avenue line (Figure 2\.16\). `tract.centroids.NS` stamps `tract.centroids` with their location in either a `North` or `South`\-side buffer. Note that these are one\-sided buffers (`singleSide = TRUE`). `tract.centroids.distance` column binds (`cbind`) together the resulting data frame with a column measuring distance to `girard`.
The `dist2Line` function from the `geosphere` package takes two inputs. The first is a matrix of `tract.centroids` projected into decimal degrees (`4326`). The second is the `girard` avenue line, also in decimal degrees. `dist2Line` does not take an `sf` however, but a layer converted to the older R geospatial standard `sp`, with `as_Spatial`. Finally, `mutate` sets all distances north of Girard negative to enable the plots in Figure 2\.16 below. As always, run each line separately to understand the process.
```
tract.centroids.NS <-
st_intersection(tract.centroids,
rbind(
st_buffer(girard, 10000, singleSide = TRUE) %>% mutate(Side = "South"),
st_buffer(girard, -10000, singleSide = TRUE) %>% mutate(Side = "North")))
tract.centroids.distance <-
cbind(
tract.centroids.NS,
dist2Line(
st_coordinates(st_transform(tract.centroids.NS, 4326)),
as_Spatial(st_transform(girard, 4326)))) %>%
mutate(distance = ifelse(Side == "North", distance * -1, distance))
```
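As a hedged sketch, the `Percent_White` panel of that figure might be drawn as follows; the `Percent_White` and `year` columns are assumptions about the wrangled Census data:

```
tract.centroids.distance %>%
  st_drop_geometry() %>%
  ggplot(aes(distance, Percent_White, colour = Side)) +
    geom_point() +
    geom_smooth(method = "loess", se = F) +           # smoothed trend on each side
    geom_vline(xintercept = 0, linetype = "dashed") + # Girard Avenue
    facet_wrap(~year) +                               # one panel per Census year
    labs(title = "Percent white as a function of distance to Girard Avenue",
         x = "Distance to Girard Ave. (negative = North)", y = "Percent white") +
    plotTheme()
```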
The resulting discontinuity plots (Figure 2\.16\) for both `Percent_White` and single\-family house price show significant differences at the Girard Ave. boundary, and these differences seem to increase with time. The deliverables for this assignment include:
1. Wrangle street (line) data, Census and other open data for your study area. Buffer the street line and use it to subset tracts/data from the larger city. Measure the distance from a tract centroid or other outcome to the boundary line with `geosphere::dist2Line`.
2. Choose 2 Census outcomes and two other point\-level outcomes from local open data, such as crime, home prices, construction permits etc. Develop maps and discontinuity plots to show across\-boundary differences.
3. Write a short research brief asking, ‘How does the such\-and\-such\-boundary partition my community?’ Motivate your analysis; provide some historical context in your community; present your maps and plots; and conclude.
4. Bonus: After you have finished this book and have a better understanding of fixed effects regression, estimate a regression of say, house price as a function of a boundary\-side fixed effect and year. Interact or multiply the side and year variables. What is the interpretation of the resulting estimate?
scale_fill_manual(values = c("#F8766D", "#00BFC4")) +
labs(title = "1/8mi buffer inside & outside UGA") +
mapTheme()
```
### 2\.2\.1 Associate each inside/outside buffer with its respective town.
Additional wrangling associates a town with each buffer. `table(st_is_valid(studyAreaTowns))` will likely tell you that there is a broken geometry in this layer. In the code block below, `st_make_valid` corrects the issue.
The goal is for the intersection to yield an `Inside` feature and an `Outside` feature for each town. Two towns only show up on one side of the boundary and are thus removed. `arrange(buffersAndTowns, MUNI) %>% print(n=25)` shows the remaining features.
```
buffersAndTowns <-
st_intersection(st_make_valid(studyAreaTowns), bothBuffers) %>%
filter(MUNI != "MOUNTVILLE BOROUGH" & MUNI != "MILLERSVILLE BOROUGH")
```
Finally, the map below can be produced, which shows, for each town, areas inside and outside the UGA. I have purposely mislabeled this map and omitted a legend. Can you build a better data visualization than this?
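As a starting point, here is one minimal sketch of such a map; the base layer, colors, and legend title are placeholders rather than the figure from the text.

```
ggplot() +
  geom_sf(data = studyAreaTowns, fill = "grey90", colour = "white") +
  geom_sf(data = buffersAndTowns, aes(fill = Legend), colour = NA) +
  labs(title = "Buffers inside & outside the UGA, by town",
       fill = "Side of UGA") +
  mapTheme()
```

A stronger version might label each `MUNI` directly or facet by town.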
### 2\.2\.2 Building density by town \& by inside/outside the UGA
The bid\-rent model suggests that the town best suited for UGA expansion has the greatest infill potential, defined as the difference in building density on either side of the UGA. In this section, building density is calculated inside/outside the UGA by town, and the difference is taken.
To do so, each building must know its town and its place `Inside` or `Outside` the UGA. These relationships are calculated below.
The first code block below converts the `buildings` polygon layer to point centroids (`st_centroid`). A `counter` field is set to `1`, and will be used to sum the number of `buildingCentroids` that fall into each town by inside/outside polygon.
The second code block uses `aggregate` to spatially join `buildingCentroids` to `buffersAndTowns`, taking the `sum` of `counter`, to get the count of buildings. `cbind` marries the spatial join to the original `buffersAndTowns` layer; `mutate` replaces `NA` with `0` for polygons with no buildings, and `Area` is calculated (`st_area`).
```
buildingCentroids <-
st_centroid(buildings) %>%
mutate(counter = 1) %>%
dplyr::select(counter)
buffersAndTowns_Buildings <-
aggregate(buildingCentroids, buffersAndTowns, sum) %>%
cbind(buffersAndTowns) %>%
mutate(counter = replace_na(counter, 0),
Area = as.numeric(st_area(.)))
```
The resulting data is grouped and summarized to get the sum of buildings and area for each town by inside/outside polygon. `Building_Density` is then calculated. Check out the `buffersAndTowns_Buildings_Summarize` layer to see the result.
```
buffersAndTowns_Buildings_Summarize <-
buffersAndTowns_Buildings %>%
group_by(MUNI, Legend) %>%
summarize(Building_Count = sum(counter),
Area = sum(Area)) %>%
mutate(Building_Density = Building_Count / Area)
```
The code block below calculates the inside/outside density difference by town. Note this requires the data to be moved to wide form with `spread`. The resulting `Building_Difference` is sorted in descending order and the results can be used with other metrics to plan UGA expansion. Section 2\.3 below provides additional context for these differences.
Which towns seem like good candidates for UGA expansion?
```
buildingDifferenceTable <-
st_drop_geometry(buffersAndTowns_Buildings_Summarize) %>%
dplyr::select(MUNI, Legend, Building_Density) %>%
spread(Legend, Building_Density) %>%
mutate(Building_Difference = Inside - Outside) %>%
arrange(desc(Building_Difference))
```
| MUNI | Inside | Outside | Building\_Difference |
| --- | --- | --- | --- |
| EAST PETERSBURG BOROUGH | 0\.0000304 | 0\.0000000 | 0\.0000304 |
| WEST LAMPETER TOWNSHIP | 0\.0000190 | 0\.0000021 | 0\.0000169 |
| WEST HEMPFIELD TOWNSHIP | 0\.0000198 | 0\.0000033 | 0\.0000165 |
| UPPER LEACOCK TOWNSHIP | 0\.0000177 | 0\.0000033 | 0\.0000144 |
| EAST LAMPETER TOWNSHIP | 0\.0000170 | 0\.0000050 | 0\.0000120 |
| COLUMBIA BOROUGH | 0\.0000100 | 0\.0000000 | 0\.0000100 |
| MANHEIM TOWNSHIP | 0\.0000094 | 0\.0000020 | 0\.0000074 |
| PEQUEA TOWNSHIP | 0\.0000113 | 0\.0000048 | 0\.0000065 |
| LANCASTER TOWNSHIP | 0\.0000065 | 0\.0000000 | 0\.0000065 |
| MANOR TOWNSHIP | 0\.0000094 | 0\.0000038 | 0\.0000057 |
| EAST HEMPFIELD TOWNSHIP | 0\.0000061 | 0\.0000037 | 0\.0000024 |
| CITY OF LANCASTER | 0\.0000014 | 0\.0000000 | 0\.0000014 |
Table 2\.1
### 2\.2\.3 Visualize buildings inside \& outside the UGA
For a more precise look at a single town, the map below shows buildings inside and outside West Hempfield Township’s UGA, and a cutout map situating the town within the larger county.
The first step to create this map is to classify West Hempfield buildings by whether they are inside or outside the UGA. Start by intersecting the UGA and the West Hempfield township to find areas in both the UGA and the town (`uga_WH`).
`buildings_WH` is a layer of buildings labeled as either `Inside UGA` or `Outside UGA`. This layer is created by first using a spatial selection to find all the `buildings` *within* the `uga_WH` polygon. To find buildings *without* (ie. outside) the UGA, a spatial selection finds all `buildings` in West Hempfield, which is then passed to another spatial selection that keeps (`st_disjoint`) only the buildings **not** in West Hempfield’s UGA. This leaves buildings in West Hempfield but not in its UGA.
```
westHempfield <-
filter(studyAreaTowns, MUNI == "WEST HEMPFIELD TOWNSHIP")
uga_WH <-
st_intersection(uga, westHempfield) %>%
st_union() %>%
st_sf()
buildings_WH <-
rbind(
buildings[uga_WH,] %>% #Within
mutate(Legend = "Inside UGA"),
buildings[westHempfield,] %>% #Without
.[uga_WH, , op = st_disjoint] %>%
mutate(Legend = "Outside UGA"))
```
Can you re\-create the below map? There are two maps that need to be created \- one for West Hempfield and one for the cutout.
The West Hempfield map contains three `geom_sf` calls that 1\) plot the boundary of `westHempfield`; 2\) plot the `st_intersection` of `greenSpace` and `westHempfield` to find all the green space in town; and 3\) plot `buildings_WH` colored by the Inside/Outside `Legend`.
The cutout includes two `geom_sf` calls that 1\) plot all `studyAreaTowns` in grey and 2\) plot `westHempfield` in black.
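A hedged sketch of the two `ggplot` objects referenced below follows; the fills and title are placeholders, not the book’s exact aesthetics.

```
WestHempfield_BuildingsPlot <-
  ggplot() +
  geom_sf(data = westHempfield, fill = "white", colour = "black") +
  geom_sf(data = st_intersection(greenSpace, westHempfield),
          fill = "darkgreen", colour = NA) +
  geom_sf(data = buildings_WH, aes(fill = Legend), colour = NA) +
  labs(title = "Buildings inside & outside West Hempfield's UGA") +
  mapTheme()
studyAreaCutoutMap <-
  ggplot() +
  geom_sf(data = studyAreaTowns, fill = "grey80", colour = "white") +
  geom_sf(data = westHempfield, fill = "black") +
  mapTheme()
```

If geometry errors arise in the intersection, `st_make_valid` can be applied as it was above.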
The two `ggplot` layers are then `print`ed in succession below. The second line adds the cutout map and specifies where on the plot it should be located.
```
print(WestHempfield_BuildingsPlot)
print(studyAreaCutoutMap, vp=viewport(.32, .715, .35, .5))
```
2\.3 Return to Lancaster’s Bid Rent
-----------------------------------
Section 2\.1\.1 suggested that development restrictions just outside of the UGA keep building density artificially low, and that if those restrictions were lifted, a short\-term increase in building density (and land values) might be expected just outside.
In this section, we visualize density as a function of distance to the UGA, much like a bid\-rent curve. These plots should provide visual evidence of a sharp discontinuity in density at the UGA boundary.
Figure 2\.7 calculated housing unit density from aggregated Census tract data, but here, building density is calculated with the actual `buildingCentroids`, as it was in Section 2\.2\.2\. Instead of calculating densities on either immediate side of the UGA, it is calculated at multiple distance intervals using the `multipleRing` buffer tool.
The `multipleRing` function iteratively draws buffers at successive intervals inside and outside the UGA. The code block below uses the function twice \- drawing negative buffers first and positive buffers second, both at 1/8 mile intervals. Note that the negative buffer can only extend so far into the interior of the UGA \- in this case, 2\.75 miles (14,520 ft).
The map plots the multiple ring buffer without buffer boundaries. `scale_fill_gradient2` is used to create the dual color gradient that diverges at `distance == 0`.
```
multipleRing <-
rbind(
multipleRingBuffer(uga_union, -14520, -660) %>%
mutate(Legend = "Inside the UGA"),
multipleRingBuffer(uga_union, 26400, 660) %>%
mutate(Legend = "Outside the UGA"))
```
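A sketch of the map described above is given below. It assumes `multipleRingBuffer` returns a `distance` field, and the diverging colors and legend title are placeholders.

```
ggplot() +
  geom_sf(data = multipleRing, aes(fill = distance), colour = NA) +
  scale_fill_gradient2(low = "#00BFC4", mid = "white", high = "#F8766D",
                       midpoint = 0, name = "Distance to UGA (ft)") +
  labs(title = "1/8mi intervals inside & outside the UGA") +
  mapTheme()
```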
Figures 2\.13 and 2\.14 below visualize the bid\-rent curve for the study area and by town, respectively. To calculate these plots, the code in 2\.2\.2 is extended. As before, each building must know its location `Inside` or `Outside` the UGA; the `studyAreaTown` in which it is located; and now, its distance to/from the `uga`.
First, `buildingCentroids` is intersected with the `multipleRing` buffer. As before, `aggregate` spatial joins `buildingCentroids` to the `RingsAndTowns` polygons, calculating the sum of buildings for each. `Area` is calculated and ring/town polygons with no buildings have `NA` changed to 0\.
```
RingsAndTowns <-
st_intersection(multipleRing, st_make_valid(studyAreaTowns))
buildings.in.RingsAndTowns <-
aggregate(buildingCentroids,
RingsAndTowns, sum) %>%
cbind(RingsAndTowns) %>%
dplyr::select(-geometry.1) %>%
mutate(counter = replace_na(counter, 0),
Area = as.numeric(st_area(.)))
```
Figure 2\.13 plots building density as a function of distance to and from the UGA. To create this plot, the sf layer is converted to a data frame with `st_drop_geometry`. The data is grouped by the `distance` interval and inside/outside and `Building_Density` is calculated.
`geom_vline` is used to create the vertical line at the UGA and the line break is created by setting `colour = Legend` in the `aes` parameter of `ggplot`. `geom_smooth(method="loess", se=F)` fits a ‘local regression’ (`loess`) line to the scatterplot relationship between `distance` and `Building_Density`.
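A sketch of Figure 2\.13’s construction, assuming the `distance` field carried over from `multipleRing` and placeholder labels:

```
st_drop_geometry(buildings.in.RingsAndTowns) %>%
  group_by(distance, Legend) %>%
  summarize(Building_Count = sum(counter),
            Area = sum(Area)) %>%
  mutate(Building_Density = Building_Count / Area) %>%
  ggplot(aes(distance, Building_Density, colour = Legend)) +
  geom_point(size = .5) +
  geom_smooth(method = "loess", se = F) +
  geom_vline(xintercept = 0, linetype = "dashed") +
  labs(title = "Building density by distance to the UGA",
       x = "Distance to the UGA (ft)", y = "Building Density") +
  plotTheme()
```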
This ‘boundary discontinuity’ plot highlights how the bid\-rent conditions change on either immediate side of the UGA. How does it look for each town?
Figure 2\.14 is created below in the same way that Figure 2\.13 is created but with the addition of `MUNI` in the `group_by`, as well as `facet_wrap`. Some towns are completely within the UGA, and thus have no `Building_Density` outside. These plots again help reveal towns likely suitable for UGA expansion.
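The by\-town version differs only in the grouping and faceting; a sketch under the same assumptions:

```
st_drop_geometry(buildings.in.RingsAndTowns) %>%
  group_by(MUNI, distance, Legend) %>%
  summarize(Building_Count = sum(counter),
            Area = sum(Area)) %>%
  mutate(Building_Density = Building_Count / Area) %>%
  ggplot(aes(distance, Building_Density, colour = Legend)) +
  geom_smooth(method = "loess", se = F) +
  geom_vline(xintercept = 0, linetype = "dashed") +
  facet_wrap(~MUNI, scales = "free") +
  plotTheme()
```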
2\.4 Conclusion \- On boundaries
--------------------------------
Boundaries are a ubiquitous feature of the American landscape. States, counties, school districts, Congressional districts, and municipalities are all delineated by exacting boundaries that divide the landscape into neighborhoods, enclaves, political regimes, and economies. Boundaries dictate the haves and have\-nots; who pays more taxes and who pays less; who can access resources, and who cannot. Boundaries dictate the allocation of resources across space.
At times, boundaries are drawn to prevent phenomena from spilling out across space. In this chapter, the concern was with suburban sprawl spilling out onto hinter farmlands and hurting Lancaster County’s agrarian economy. To make room for more development, the bid\-rent model was used to identify towns suitable for UGA expansion. Our assumption was that once expanded, building density, land values, and real estate prices just outside the UGA would adjust to those just inside.
Thus far, the reader has learned a series of analytical and visualization strategies for working with geospatial data. These methods will be invaluable as we move into more complex data science in the chapters to follow.
2\.5 Assignment \- Boundaries in your community
-----------------------------------------------
At the periphery of an Urban Growth Area is a hard boundary, a wall, with a different legal regime on either side. Interestingly, cities and regions are divided by many soft boundaries as well, with appreciably different conditions on either side, but not because of any legal mandate. Many historical inequities partitioned America into economic and racial enclaves, the results of which are still evident today.
What soft boundaries exist in your city, and how do they separate the communities on either side? In this assignment, you will wrangle together a significant street, avenue or other soft boundary of your choosing and create discontinuity plots similar to Figure 2\.13 above.
Take Philadelphia’s Girard Avenue (Figure 2\.15\), a clear dividing line between gentrified neighborhoods to the South (adjacent to Center City), and those to the North, which have been slower to change.
The code block below calculates distance from each tract to the Girard Avenue line (Figure 2\.16\). `tract.centroids.NS` stamps `tract.centroids` with their location in either a `North` or `South`\-side buffer. Note that these are one\-sided buffers (`singleSide = TRUE`). `tract.centroids.distance` column binds (`cbind`) together the resulting data frame with a column measuring distance to `girard`.
The `dist2Line` function from the `geosphere` package takes two inputs. The first is a matrix of `tract.centroids` projected into decimal degrees (`4326`). The second is the `girard` avenue line, also in decimal degrees. `dist2Line` does not take an `sf` layer, however, but one converted to the older R geospatial standard `sp`, with `as_Spatial`. Finally, `mutate` sets all distances north of Girard negative to enable the plots in Figure 2\.16 below. As always, run each line separately to understand the process.
```
tract.centroids.NS <-
st_intersection(tract.centroids,
rbind(
st_buffer(girard, 10000, singleSide = TRUE) %>% mutate(Side = "South"),
st_buffer(girard, -10000, singleSide = TRUE) %>% mutate(Side = "North")))
tract.centroids.distance <-
cbind(
tract.centroids.NS,
dist2Line(
st_coordinates(st_transform(tract.centroids.NS, 4326)),
as_Spatial(st_transform(girard, 4326)))) %>%
mutate(distance = ifelse(Side == "North", distance * -1, distance))
```
The resulting discontinuity plots (Figure 2\.16\) for both `Percent_White` and single\-family house price show significant differences at the Girard Ave. boundary, and these differences seem to increase with time. The deliverables for this assignment include:
1. Wrangle street (line) data, Census and other open data for your study area. Buffer the street line and use it to subset tracts/data from the larger city. Measure the distance from a tract centroid or other outcome to the boundary line with `geosphere::dist2Line`.
2. Choose 2 Census outcomes and two other point\-level outcomes from local open data, such as crime, home prices, construction permits etc. Develop maps and discontinuity plots to show across\-boundary differences.
3. Write a short research brief asking, ‘How does the such\-and\-such\-boundary partition my community?’ Motivate your analysis; provide some historical context in your community; present your maps and plots; and conclude.
4. Bonus: After you have finished this book and have a better understanding of fixed effects regression, estimate a regression of say, house price as a function of a boundary\-side fixed effect and year. Interact or multiply the side and year variables. What is the interpretation of the resulting estimate?
Chapter 3 Intro to geospatial machine learning, Part 1
======================================================
3\.1 Machine learning as a Planning tool
----------------------------------------
The descriptive analytics in the first two chapters provide context to non\-technical decision\-makers. The predictive analytics in this and subsequent chapters help convert those insights into actionable intelligence.
Prediction is not new to Planners. Throughout history, Planners have made ill\-fated forecasts into an uncertain future. The 1925 Plan for Cincinnati is one such example. Cincinnati, delighted with the prosperity it had achieved up to 1925, set out to plan for the next hundred years by understanding future demand for land, housing, employment and more.
Population forecasting, the author wrote, is the “obvious…basis upon which scientific city planning must be formulated”. Rebuilt from the actual plan, Figure 3\.1 visualizes the city’s dubious population forecast[18](#fn18), which made the assumption that historical growth would continue or ‘generalize’ long into the future. In reality, a great depression, suburbanization, and deindustrialization caused the city’s population to fall nearly 70% below its 1950 peak. So much for ‘the plan’.
To reasonably assume Cincinnati’s past would generalize to its future, Planners would have had to understand how complex systems like housing markets, labor markets, credit markets, immigration, politics, and technology all interrelate. If those systems were understood, Planners would know which levers to pull to bring health, happiness and prosperity to all.
We are far from such an understanding. Thus, our goal will be to make predictions in more bounded systems, where recent experiences more reasonably *generalize* to the near future. We will ‘borrow’ experiences from observed data, and test whether they can be used to predict outcomes where and when they are unknown. Throughout the remainder of the book, we will learn that generalizability is the most important concept for applying machine learning in government.
### 3\.1\.1 Accuracy \& generalizability
This chapter focuses on home price prediction, which is a common use case in cities that use data to assess property taxes. The goal is to *train* a model from recent transactions, the ‘training set’, and test whether that model generalizes to properties that have not recently sold. This is comparable to the *Zestimate* algorithm that Zillow uses to estimate home prices.
One way to evaluate such a model is to judge its *accuracy*, defined as the difference between predicted and observed home prices. Another, more nuanced criteria is *generalizability*, which has two different meanings:
Imagine training an age\-prediction robot on data from 1000 people, including you and me. It is far less impressive for the robot to predict my age compared to a random person, because it was trained, in part, on my data. Thus, a generalizable model is one that accurately predicts on *new* data \- like every house that hasn’t sold in recent years.[19](#fn19)
Now imagine the age\-prediction robot was trained on data from a retirement community and tasked to predict in a middle school. The robot might be accurate for seniors but would likely fail for young adults. Thus, a generalizable model is also one that predicts with comparable accuracy across different groups \- like houses in different neighborhoods. As we will learn, a predictive model lacking accuracy and generalizability will not be a useful decision\-making tool.
### 3\.1\.2 The machine learning process
Curiosity, creativity and problem solving are the key to useful data science solutions, but organization ensures a reproducible work flow. The below framework highlights the major steps in the predictive modeling process:
**Data wrangling**: The first step is to compile the required data into one dataset, often from multiple sources. This includes the outcome of interest (the ‘dependent variable’) and the ‘features’ needed to predict that outcome. Data wrangling often involves data cleaning, which is both arduous and critical. If mistakes are made at the data wrangling stage, all the downstream work may be for naught.
**Exploratory analysis**: Exploratory analysis, like the indicators we have already discussed, is critical for understanding the system of interest. Exploratory analysis investigates both the underlying spatial process in the outcome of interest as well as trends and correlations between the outcome and the predictive features.
**Feature engineering**: Feature engineering is the difference between a good machine learning model and a great one. Features are the variables used to predict the outcome of interest by mining the data for predictive insight. Social scientists refer to these as ‘independent variables’, but features are a bit different.
Social scientists fear that transforming a variable (ie. changing its context without a good theoretical reason) may muddle the interpretation of a statistical result. In prediction, interpretation is not as important as accuracy and generalizability, so transformation or feature engineering becomes imperative.
The first key to strong feature engineering is experience with feature engineering. While we practice here, many transformation and ‘dimensionality reduction’ techniques are beyond the scope of this book, but critical to machine learning. The second key is domain expertise. It may seem that reducing machine learning success to accuracy and generalizability negates the importance of context. In fact, the more the data scientist understands the problem, the better she will be at parameterizing the underlying system.
Geospatial feature engineering is about measuring ‘exposure’ from an outcome, like a house sale, to the locational phenomena that can help predict it, like crime. We will spend a great deal of time discussing this.
**Feature selection**: While hundreds of features may be engineered for a single project, often only a concise set is included in a model. Many features may be correlated with each other, a phenomenon known as ‘colinearity’, and feature selection is the process of whittling features down to a parsimonious set.
**Model estimation and validation**: A statistical model is an abstraction of reality that produces ‘estimates’, not facts. There are many different models, some more simple than others. In this book, the focus is on Linear and Generalized Linear regression models because they are more transparent and computationally more efficient. Once one is familiar with the machine learning framework however, more advanced algorithms can be substituted.
### 3\.1\.3 The hedonic model
The hedonic model is a theoretical framework for predicting home prices by deconstructing house price into the value of its constituent parts, like an additional bedroom, the presence of a pool, or the amount of local crime.[20](#fn20)
For our purposes, home prices can be deconstructed into three constituent parts \- 1\) physical characteristics, like the number of bedrooms; 2\) public services/(dis)amenities, like crime; and 3\) the spatial process of prices \- namely that house prices cluster at the neighborhood, city and regional scales. The regression model developed below omits the spatial process, which is then added in Chapter 4\. Pay close attention to how this omission leads to a less accurate and generalizable model.
In this chapter, key concepts like colinearity and feature engineering are returned to at different stages throughout. While this makes for a less linear narrative, a focus on these skills is necessary to prepare us for the more nuanced use cases that lie ahead. In the next section, data is wrangled, followed by an introduction to feature engineering. Ordinary Least Squares regression is introduced, and models are validated for their generalizability using cross\-validation.
3\.2 Data wrangling \- Home price \& crime data
-----------------------------------------------
Libraries are loaded in the code block below.
```
library(tidyverse)
library(sf)
library(spdep)
library(caret)
library(ckanr)
library(FNN)
library(grid)
library(gridExtra)
library(ggcorrplot)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette5 <- c("#25CB10", "#5AB60C", "#8FA108", "#C48C04", "#FA7800")
```
Our model will be trained on home price data from Boston, Massachusetts. The code block below downloads a neighborhoods geojson from the Boston Open Data site; reads in the home sale price data as a csv; converts to an sf layer (`st_as_sf`); and projects.
```
nhoods <-
st_read("http://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson") %>%
st_transform('ESRI:102286')
boston <-
read.csv(file.path(root.dir,"/Chapter3_4/bostonHousePriceData_clean.csv"))
boston.sf <-
boston %>%
st_as_sf(coords = c("Longitude", "Latitude"), crs = 4326, agr = "constant") %>%
st_transform('ESRI:102286')
```
Sale prices are then mapped in quintile breaks using the `nhoods` basemap. What do you notice about the spatial process of home prices? Are they randomly distributed throughout the city or do they seem clustered? Why do you think prices are spatially distributed the way they are?
```
ggplot() +
geom_sf(data = nhoods, fill = "grey40") +
geom_sf(data = boston.sf, aes(colour = q5(PricePerSq)),
show.legend = "point", size = .75) +
scale_colour_manual(values = palette5,
labels=qBr(boston,"PricePerSq"),
name="Quintile\nBreaks") +
labs(title="Price Per Square Foot, Boston") +
mapTheme()
```
`names(boston.sf)` suggests the data includes many parcel and building\-specific features, but no neighborhood characteristics. The Analyze Boston open data site has many datasets that could be engineered into useful features.[21](#fn21)
The site runs on the Comprehensive Knowledge Archive Network or CKAN open data framework which includes an API on ‘Crime Incident Reports’.[22](#fn22) For a glimpse into the API, paste the commented out query in the code block below into your browser.
The R package, `ckanr`, can talk directly to open data APIs built on CKAN technology, like Analyze Boston. In the code block below, `resource_id` corresponds to the Incidents dataset. The below code block returns the first row and several select fields of the `records` table. Note for this resource, the maximum number of rows the API will return is 100, but a function can be created to return all the data incrementally.
```
#https://data.boston.gov/api/3/action/datastore_search_sql?sql=SELECT * from "12cb3883-56f5-47de-afa5-3b1cf61b257b" WHERE "OCCURRED_ON_DATE" = '2019-06-21 11:00:00'
ds_search(resource_id = '12cb3883-56f5-47de-afa5-3b1cf61b257b',
url = "https://data.boston.gov/",
as = "table")$records[1,c(2,7,11,13)]
```
To keep it simple, a downloaded and wrangled crime dataset has been provided as `bostonCrimes.csv`, which can be read in with `read.csv`. `length(unique(bostonCrimes$OFFENSE_CODE_GROUP))` tells us there are 64 unique offenses in the data. Below, the five most frequent incident types are output.
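The read itself is a single line; the directory below is an assumption based on how the other chapter datasets are hosted on `root.dir`.

```
bostonCrimes <- read.csv(file.path(root.dir,"/Chapter3_4/bostonCrimes.csv"))
```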
```
group_by(bostonCrimes, OFFENSE_CODE_GROUP) %>%
summarize(count = n()) %>%
arrange(-count) %>% top_n(5)
```
```
## # A tibble: 5 x 2
## OFFENSE_CODE_GROUP count
## <chr> <int>
## 1 Motor Vehicle Accident Response 23717
## 2 Larceny 16869
## 3 Drug Violation 15815
## 4 Other 12797
## 5 Medical Assistance 12626
```
For now, the code block below subsets `Aggravated Assault` crimes with XY coordinates. Note the `st_as_sf` function converts a data frame of coordinates to `sf`, and that the original `crs` of `bostonCrimes.sf` is in decimal degrees (`4326`). Map the points or visualize assault hotspots with the `stat_density2d` function, as below.
```
bostonCrimes.sf <-
bostonCrimes %>%
filter(OFFENSE_CODE_GROUP == "Aggravated Assault",
Lat > -1) %>%
dplyr::select(Lat, Long) %>%
na.omit() %>%
st_as_sf(coords = c("Long", "Lat"), crs = 4326, agr = "constant") %>%
st_transform('ESRI:102286') %>%
distinct()
ggplot() + geom_sf(data = nhoods, fill = "grey40") +
stat_density2d(data = data.frame(st_coordinates(bostonCrimes.sf)),
aes(X, Y, fill = ..level.., alpha = ..level..),
size = 0.01, bins = 40, geom = 'polygon') +
scale_fill_gradient(low = "#25CB10", high = "#FA7800",
breaks=c(0.000000003,0.00000003),
labels=c("Minimum","Maximum"), name = "Density") +
scale_alpha(range = c(0.00, 0.35), guide = FALSE) +
labs(title = "Density of Aggravated Assaults, Boston") +
mapTheme()
```
### 3\.2\.1 Feature Engineering \- Measuring exposure to crime
Of a potential home purchase, many buyers ask, “Is there a lot of crime in this neighborhood?” What is ‘a lot’, how should one define ‘neighborhood’, and what is a good indicator of crime? Any set of choices will suffer from both ‘measurement error’ and scale bias, and different buyers in different neighborhoods will value crime exposure differently. Feature engineering is the art and science of defining these relationships in a model and below, three possible approaches are discussed.
The first is to sum crime incidents for an arbitrary areal unit, like Census tract. This is the least optimal as it introduces scale bias related to the Modifiable Areal Unit Problem (Section 1\.1\.1\).
The second is to sum crimes within a fixed buffer distance of each home sale observation. This approach implies that the scale relationship between crime and home prices is uniform citywide, which is not likely true.
The code block below creates a new feature, `crimes.Buffer`, that uses a spatial join (`aggregate`) to count crimes within a 1/8 mi. buffer of each home sale observation. `pull` converts the output from an `sf` layer to a numeric vector.
```
boston.sf$crimes.Buffer =
st_buffer(boston.sf, 660) %>%
aggregate(mutate(bostonCrimes.sf, counter = 1),., sum) %>%
pull(counter)
```
A third method calculates the ‘average nearest neighbor distance’ from each home sale to its *k* nearest neighbor crimes. Figure 3\.4 provides an example when `k=4`. The average nearest neighbor distance for the “close” and “far” groups is 15 and 34, respectively, suggesting the close group is ‘more exposed’ to crime than the far group.
How is this approach advantageous over a fixed buffer? There are still scale biases in assuming one parameter of *k* is the ‘correct’ one. In my experience however, this approach allows for a model to capitalize on very small continuous variations in distance.
The `functions.R` file includes the function, `nn_function` for calculating average nearest neighbor distance. The function takes 3 parameters \- coordinates of the point layer we wish to `measureFrom`; coordinates of the point layer we wish to `measureTo`; and the number of `k` nearest neighbors.
It is easier to understand how the `nn_function` works if the reader is willing to run through it line\-by\-line. The logic is as follows, and a sketch mirroring these steps appears after the list:
1. The `get.knnx` function creates a matrix of nearest neighbor distances, `nn.dist`, from each `measureFrom` point to `k` `measureTo` points.
2. The `nn.dist` matrix is converted into a data frame.
3. `rownames_to_column` creates a unique field denoting each unique `measureFrom` point.
4. `gather` converts from wide to long form.
5. `arrange` sorts the `measureFrom` field ascending.
6. `group_by` each `measureFrom` point and use `summarize` to take the mean `pointDistance`.
7. Convert `thisPoint` to numeric, sort ascending again, then remove `thisPoint`.
8. `pull` the average nearest neighbor distance.
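As promised, here is a sketch that mirrors these steps; the packaged `nn_function` in `functions.R` may differ in its details, and the name `nn_function_sketch` is ours.

```
nn_function_sketch <- function(measureFrom, measureTo, k) {
  # 1. matrix of distances from each measureFrom point to its k nearest measureTo points
  nn.dist <- FNN::get.knnx(measureTo, measureFrom, k = k)$nn.dist
  as.data.frame(nn.dist) %>%
    rownames_to_column(var = "thisPoint") %>%           # 2-3. data frame with a unique id
    gather(points, pointDistance, -thisPoint) %>%       # 4. wide to long
    arrange(as.numeric(thisPoint)) %>%                  # 5. sort ascending
    group_by(thisPoint) %>%
    summarize(pointDistance = mean(pointDistance)) %>%  # 6. mean of the k distances
    arrange(as.numeric(thisPoint)) %>%                  # 7. sort again, drop the id
    dplyr::select(-thisPoint) %>%
    pull()                                              # 8. numeric vector of average distances
}
```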
The `nn_function` is embedded in `mutate` to create five new features in `boston.sf` with up to 5 *k* nearest neighbors. `st_c` (shorthand for `st_coordinates`) converts the data frame to a matrix of XY coordinates. This section has provided a short introduction in geospatial feature engineering. In the next section, each of these features are correlated with house price.
```
st_c <- st_coordinates
boston.sf <-
boston.sf %>%
mutate(
crime_nn1 = nn_function(st_c(boston.sf), st_c(bostonCrimes.sf), 1),
crime_nn2 = nn_function(st_c(boston.sf), st_c(bostonCrimes.sf), 2),
crime_nn3 = nn_function(st_c(boston.sf), st_c(bostonCrimes.sf), 3),
crime_nn4 = nn_function(st_c(boston.sf), st_c(bostonCrimes.sf), 4),
crime_nn5 = nn_function(st_c(boston.sf), st_c(bostonCrimes.sf), 5))
```
### 3\.2\.2 Exploratory analysis: Correlation
```
st_drop_geometry(boston.sf) %>%
mutate(Age = 2015 - YR_BUILT) %>%
dplyr::select(SalePrice, LivingArea, Age, GROSS_AREA) %>%
filter(SalePrice <= 1000000, Age < 500) %>%
gather(Variable, Value, -SalePrice) %>%
ggplot(aes(Value, SalePrice)) +
geom_point(size = .5) + geom_smooth(method = "lm", se=F, colour = "#FA7800") +
facet_wrap(~Variable, ncol = 3, scales = "free") +
labs(title = "Price as a function of continuous variables") +
plotTheme()
```
Correlation is an important form of exploratory analysis, identifying features that may be useful for predicting `SalePrice`. In this section, correlation is visualized and in the next, correlation is estimated.
In Figure 3\.5, `SalePrice` is plotted as a function of three numeric features, `Age` (a feature created from `YR_BUILT`), `LivingArea`, and `GROSS_AREA`. Is home sale price related to these features? A ‘least squares’ line is drawn through the point cloud and the more the point cloud ‘hugs’ the line, the greater the correlation. In the code block above, the least squares line is generated with `geom_smooth(method = "lm")`.
```
boston %>%
dplyr::select(SalePrice, Style, OWN_OCC, NUM_FLOORS) %>%
mutate(NUM_FLOORS = as.factor(NUM_FLOORS)) %>%
filter(SalePrice <= 1000000) %>%
gather(Variable, Value, -SalePrice) %>%
ggplot(aes(Value, SalePrice)) +
geom_bar(position = "dodge", stat = "summary", fun.y = "mean") +
facet_wrap(~Variable, ncol = 1, scales = "free") +
labs(title = "Price as a function of\ncategorical variables", y = "Mean_Price") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
These plots suggest a correlation exists. In all cases, the regression line slopes upward from left to right, meaning that on average, as age and house size increase, so does price. Correlation can also be described by the slope of the line. The greater the slope, the greater the feature’s effect on price.
Many features in the `boston.sf` dataset are not numeric but categorical, making correlation more complex. Slope cannot be calculated when the x\-axis is categorical. Instead, a significant *difference* in *mean* price is hypothesized across each category. For example, Figure 3\.6 outputs bar plots for three categorical features, using `geom_bar` to calculate `mean` `SalePrice` by category.
`OWN_OCC` or ‘owner\-occupied’, describes likely rental properties. Note that there is no difference on average, between owner and non\-owner\-occupied home sales. This is a good indicator that `OWN_OCC` may not be a good predictor of `SalePrice`. Conversely, there appears a significant premium associated with the `Victorian` architectural `Style`. Section 3\.2\.2 below discusses why feature engineering like recoding `Style` into fewer categories, or converting `NUM_FLOORS` from numeric to categorical, could lead to a significant improvement in a predictive model.
Finally, the relationship between `SalePrice` and crime exposure is visualized below for the six crime features. The small multiple below subsets with `dplyr::select(starts_with("crime"))`. `filter` is used to remove sales less than $1000 and greater than $1 million.
These points do not ‘hug’ the line like the scatterplots above, suggesting little correlation between crime and price. However, the plots consider just two variables (ie. ‘bivariate’), while a regression is able to account for multiple features simultaneously (ie. multivariate). We will see how the multivariate regression makes crime a significant predictor.
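A sketch of that small multiple, with placeholder title and layout:

```
st_drop_geometry(boston.sf) %>%
  dplyr::select(SalePrice, starts_with("crime")) %>%
  filter(SalePrice <= 1000000, SalePrice >= 1000) %>%
  gather(Variable, Value, -SalePrice) %>%
  ggplot(aes(Value, SalePrice)) +
  geom_point(size = .5) +
  geom_smooth(method = "lm", se = FALSE, colour = "#FA7800") +
  facet_wrap(~Variable, nrow = 2, scales = "free") +
  labs(title = "Price as a function of crime exposure") +
  plotTheme()
```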
A correlation matrix is another way to visualize correlation across numeric variables. In the code block below, `cor` and `cor_pmat`, calculate bivariate correlation and statistical significance, respectively. These statistics are explained below. In Figure 3\.8, the darker the shade of orange or green, the stronger the correlation. The `SalePrice` row shows correlations relevant to our model, but this plot also shows features that may be colinear, like `LivingArea` and `GROSS_AREA`.
Correlation analysis is a critical component of the machine learning work flow, but exploratory analysis goes beyond just correlation. Keep in mind that good exploratory analysis adds valuable context, particularly for non\-technical audiences. Next, statistical correlation and regression is introduced.
```
numericVars <-
select_if(st_drop_geometry(boston.sf), is.numeric) %>% na.omit()
ggcorrplot(
round(cor(numericVars), 1),
p.mat = cor_pmat(numericVars),
colors = c("#25CB10", "white", "#FA7800"),
type="lower",
insig = "blank") +
labs(title = "Correlation across numeric variables")
```
3\.3 Introduction to Ordinary Least Squares Regression
------------------------------------------------------
This section gives a quick and applied introduction to regression that may not satisfy readers looking for a mathematical framework. The applied approach is designed for those primarily interested in interpretation and communication.
The purpose of Linear Regression or Ordinary Least Squares Regression (OLS), is to predict the `SalePrice` of house *i*, as a function of several components, as illustrated in the equation below. The first is the ‘intercept’, \\(\\beta \_{0}\\), which is the predicted value of `SalePrice` if nothing was known about any other features. Next, the regression ‘coefficient’, \\(\\beta \_{1}\\), is interpreted as the average change in `SalePrice` given a unit increase in *X*. *X* is a feature, such as the living area of a house.
Finally, \\(\\varepsilon\_{i}\\), is the error term or residual, which represents all the variation in `SalePrice` not explained by *X*. The goal is to account for all the systematic variation in `SalePrice`, such that anything leftover (\\(\\varepsilon\_{i}\\)) is just random noise. Consider how the regression equation below can be used to explore the constituent parts of home prices as discussed in the hedonic model.
Of all the nuances of the regression model, one of the most important is the idea that regression models look for predictive ‘signals’ at the mean. This is very intuitive so long as a predictive feature does not vary substantially from its mean. For example, regressing home price and crime exposure assumes that the average relationship (the spatial process) is comparable for all neighborhoods. This is a significant assumption and one that will be discussed in the next chapter.
\\\[SalePrice\_{i}\=\\beta \_{0}\+\\beta \_{1}X\_{i}\+\\varepsilon\_{i}\\]
Let us now try to understand how a regression is estimated starting with the Pearson correlation coefficient, *r*. Correlation was visualized above, and here it is tested empirically. An *r* of 0 suggests no correlation between the two variables; \-1 indicates a strong negative relationship; and 1, a strong positive relationship. The results of the `cor.test` below describe a marginal (*r* \= 0\.36\) but statistically significant (p \< 0\.001\) positive correlation between `SalePrice` and `LivingArea`.
```
cor.test(boston$LivingArea, boston$SalePrice, method = "pearson")
```
In correlation and regression, the slope of the linear relationship between `SalePrice` and `LivingArea`, \\(\\beta \_{1}\\), is calculated with a ‘least squares’ line fit to the data by minimizing the squared difference between the observed `SalePrice` and the predicted `SalePrice`. The algebra used to fit this line is relatively uncomplicated and provides the slope and y\-intercept.
One very special characteristic of this line is that it represents the prediction. Figure 3\.9 visualizes the observed relationship between `SalePrice` and `LivingArea` in green along with the resulting sale price regression prediction, in orange. Note that these predictions fit perfectly on the least squares line. Two larger points are shown for the *same observation* to illustrate the error or residual between predicted and observed `SalePrice`.
The large residual difference (or error), \\(\\varepsilon\_{i}\\), between the orange and green points, suggests that while `LivingArea` is a good predictor, other features are needed to make a more robust prediction.
### 3\.3\.1 Our first regression model
In R, OLS regression is performed with the `lm` or ‘linear model’ function. The dependent variable `SalePrice`, is modeled as a function of (`~`) `LivingArea` and output as an `lm` object called `livingReg`. The `summary` function is used to see the results of the regression.
```
livingReg <- lm(SalePrice ~ LivingArea, data = boston)
summary(livingReg)
```
**Table 3\.1 livingReg**
| | SalePrice |
| --- | --- |
| LivingArea | 216\.539\*\*\* (14\.466\) |
| Constant | 157,968\.300\*\*\* (35,855\.590\) |
| N | 1,485 |
| R2 | 0\.131 |
| Adjusted R2 | 0\.131 |
| Residual Std. Error | 563,811\.900 (df \= 1483\) |
| F Statistic | 224\.077\*\*\* (df \= 1; 1483\) |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | |
The actual `summary` looks different from the regression output above. The `Intercept` or ‘Constant’ is the value of `SalePrice` if living area was `0`. In many instances, including this one, the `Intercept` interpretation is not particularly useful.
The estimated coefficient for `LivingArea` is 216\.54\. The interpretation is that, “On average, a one square foot increase in living area is associated with a $216\.54 increase in sale price”. Note that the coefficient is on the scale of the dependent variable, dollars.
The `Std. Error` refers to the standard error of the coefficient and is a measure of precision. The best way to interpret the standard error is in the context of the coefficient. If the standard errors are large relative to the coefficient, then the coefficient may not be reliable.
The p\-value is a more direct way to measure if the coefficient is reliable and is a standard measure of statistical significance.[23](#fn23) It is a hypothesis test based on the ‘null hypothesis’ that there is *no* relationship between `SalePrice` and `LivingArea`. If the p\-value is say, 0\.05, then there is 5% probability that the null hypothesis was mistakenly rejected. In other words, we can be 95% confident that our coefficient estimation is reliable, and thus a useful predictor of `SalePrice`.[24](#fn24) In the regression summary, the p\-values are labeled with asterisks (\*) and referenced with a legend at the bottom of the output.
The final component of the regression summary is (Adjusted) R^2 \- a common regression ‘goodness of fit’ indicator that describes how well the features explain the outcome. R^2 runs from 0 to 1 and is defined as the proportion of variation in the dependent variable, `SalePrice`, that is explained by the linear combination of the features. R^2 is interpreted on a percentage basis, so in this model, `LivingArea` explains roughly 13% of the variation in price.[25](#fn25)
The ‘proportion of variation’ interpretation is useful for comparing across models. R^2 is also linear, so a model with an R^2 of 0\.80 accounts for twice as much variation as a model with an R^2 of 0\.40\. Despite its interpretability, below we will learn other goodness of fit indicators better suited for prediction.
To understand how regression is used to predict, consider how coefficients relate to the regression equation. In the equation below, the \\(\\beta \_{1}\\) has been substituted for the estimated coefficient. Substitute *X* in the equation below, for the `LivingArea` of any Boston house, and a price prediction can be calculated.
\\\[SalePrice\_{i}\=157968\+216\.54X\_{i}\+\\varepsilon\_{i}\\]
Assuming a house with the mean living area, 2,262 square feet, the equation would yield the following prediction:
\\\[647781\.50\=157968\+(216\.54 \* 2262\)\+\\varepsilon\_{i}\\]
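The same prediction can be checked with `predict()` on the fitted model:

```
# Predicted price for a house with the mean living area (~2,262 sq ft)
predict(livingReg, newdata = data.frame(LivingArea = 2262))
```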
The error for this model, \\(\\varepsilon\_{i}\\), is high given that only 13% of the variation in `SalePrice` has been explained. These errors are explored in detail below, but for now, more features are added to reduce the error and improve the model.[26](#fn26) `lm` does not take `sf` layers as an input, so geometries are dropped from `boston.sf`. Only certain variables are `select`ed for the model, which is useful syntax for quickly adding and subtracting features.
```
reg1 <- lm(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, R_TOTAL_RM, NUM_FLOORS,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE))
summary(reg1)
```
**Table 3\.2 Regression 1**
| | SalePrice |
| --- | --- |
| LivingArea | 609\.346\*\*\* (48\.076\) |
| StyleCape | \-140,506\.900\* (79,096\.260\) |
| StyleColonial | \-343,096\.200\*\*\* (78,518\.600\) |
| StyleConventional | \-261,936\.800\*\*\* (84,123\.070\) |
| StyleDecker | \-365,755\.900\*\*\* (102,531\.300\) |
| StyleDuplex | \-183,868\.500 (128,816\.900\) |
| StyleRaised Ranch | \-390,167\.100\*\*\* (109,706\.600\) |
| StyleRanch | \-92,823\.330 (95,704\.750\) |
| StyleRow End | \-68,636\.710 (98,864\.510\) |
| StyleRow Middle | 172,722\.600\* (100,981\.400\) |
| StyleSemi?Det | \-274,146\.000\*\*\* (96,970\.880\) |
| StyleSplit Level | \-232,288\.100 (168,146\.100\) |
| StyleTri?Level | \-803,632\.100\*\* (408,127\.000\) |
| StyleTudor | \-394,103\.100 (408,553\.700\) |
| StyleTwo Fam Stack | \-147,538\.200\* (84,835\.410\) |
| StyleUnknown | \-656,090\.500\*\* (291,530\.300\) |
| StyleVictorian | \-507,379\.700\*\*\* (130,751\.100\) |
| GROSS\_AREA | \-206\.257\*\*\* (29\.108\) |
| R\_TOTAL\_RM | \-19,589\.190\*\* (8,268\.468\) |
| NUM\_FLOORS | 163,990\.700\*\*\* (38,373\.070\) |
| R\_BDRMS | \-33,713\.420\*\*\* (11,174\.750\) |
| R\_FULL\_BTH | 179,093\.600\*\*\* (23,072\.960\) |
| R\_HALF\_BTH | 85,186\.150\*\*\* (22,298\.990\) |
| R\_KITCH | \-257,206\.200\*\*\* (33,090\.900\) |
| R\_ACD | \-203,205\.700 (401,281\.500\) |
| R\_ACN | \-108,018\.900\*\*\* (35,149\.110\) |
| R\_ACU | 487,882\.600\*\*\* (127,385\.500\) |
| R\_FPLACE | 172,366\.200\*\*\* (16,240\.410\) |
| Constant | 294,677\.700\*\*\* (90,767\.260\) |
| N | 1,485 |
| R2 | 0\.571 |
| Adjusted R2 | 0\.563 |
| Residual Std. Error | 399,781\.200 (df \= 1456\) |
| F Statistic | 69\.260\*\*\* (df \= 28; 1456\) |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | |
These additional features now explain 56% of the variation in price, a significant improvement over `livingReg`. Many more coefficients are now estimated, including the architectural `Style` feature, which R automatically converts to many categorical features. These ‘dummy variables’ or ‘fixed effects’ as they are called, hypothesize a statistically significant *difference* in price across each `Style` category relative to a *reference* category \- Bungalow (`levels(boston.sf$Style)`). More intuition for fixed effects is provided in the next chapter.
### 3\.3\.2 More feature engineering \& colinearity
Before going forward, this section further emphasizes two critical concepts, feature engineering and colinearity. Beginning with feature engineering, `reg1` estimates the effect of `NUM_FLOORS` encoded as a continuous feature. `table(boston$NUM_FLOORS)` shows that houses have between 0 and 5 floors, and some have half floors. What happens when this feature is re\-engineered as categorical? The below `mutate` does just this with `case_when`, a more nuanced `ifelse`.
```
boston.sf <-
boston.sf %>%
mutate(NUM_FLOORS.cat = case_when(
NUM_FLOORS >= 0 & NUM_FLOORS < 3 ~ "Up to 2 Floors",
NUM_FLOORS >= 3 & NUM_FLOORS < 4 ~ "3 Floors",
    NUM_FLOORS >= 4 ~ "4+ Floors"))
```
The newly recoded `NUM_FLOORS.cat` feature is input into `reg2` below. In the results (not pictured), R automatically removed `3 Floors` as the reference. The remaining two groups are statistically significant. More importantly, judging from the increase in Adjusted R^2, this marginal amount of feature engineering added significant predictive power to the model.
```
reg2 <- lm(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, R_TOTAL_RM, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH, R_KITCH,
R_AC, R_FPLACE))
```
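To see the improvement described above, one quick check (not part of the book's code, but using the `reg1` and `reg2` objects already estimated) is to compare adjusted R^2 directly:

```
# Adjusted R^2 before and after recoding NUM_FLOORS
summary(reg1)$adj.r.squared
summary(reg2)$adj.r.squared
```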
Turning now to colinearity, Figure 3\.10 indicates a strong (and unsurprising) correlation between the total number of rooms, `R_TOTAL_RM`, and the number of bedrooms, `R_BDRMS`. In `reg2`, `R_BDRMS` is significant, but `R_TOTAL_RM` is not. These two features are 'colinear', or correlated with one another, so when both are input into the regression, one may turn out insignificant. In such an instance, retain the feature that leads to a more accurate and generalizable model.
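To put a number on that relationship, a simple check (not from the book) is the pairwise Pearson correlation between the two `boston.sf` columns used in `reg2`:

```
# Pairwise correlation between total rooms and bedrooms
cor(boston.sf$R_TOTAL_RM, boston.sf$R_BDRMS, use = "complete.obs")
```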
As the crime features are also colinear, only one should be entered in the model. Try iteratively entering each of the crime features into the regression in addition to `reg2`. `crimes.Buffer` seems to have the greatest predictive impact. A model (not pictured) suggests that conditional on the other features, an additional crime within the 1/8th mile buffer is associated with an average price decrease of $4,390\.82\.
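One possible way to automate that iteration is sketched below: it refits the `reg2` specification with one crime feature at a time and compares adjusted R^2. The feature names assume the `crimes.Buffer` and `crime_nn1` through `crime_nn5` columns engineered earlier in the chapter; this is illustrative code rather than the book's own.

```
crimeFeatures <- c("crimes.Buffer", "crime_nn1", "crime_nn2",
                   "crime_nn3", "crime_nn4", "crime_nn5")

# Adjusted R^2 for the reg2 specification plus each crime feature in turn
sapply(crimeFeatures, function(thisFeature) {
  dat <- st_drop_geometry(boston.sf) %>%
    dplyr::select(SalePrice, LivingArea, Style, GROSS_AREA, R_TOTAL_RM,
                  NUM_FLOORS.cat, R_BDRMS, R_FULL_BTH, R_HALF_BTH,
                  R_KITCH, R_AC, R_FPLACE, all_of(thisFeature))
  summary(lm(SalePrice ~ ., data = dat))$adj.r.squared
})
```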
Although the correlation plots in Figure 3\.7 suggested little correlation between crime and price, once other variables are controlled for in a multivariate regression, crime becomes significant.
Thus far, our most accurate model (with `crimes.Buffer`) explains just 64% of the variation in `SalePrice`. In Chapter 4, new geospatial features are added to increase the model's predictive power. In the final section of this chapter, however, cross\-validation and a new set of goodness of fit metrics are introduced.
3\.4 Cross\-validation \& return to goodness of fit
---------------------------------------------------
Generalizability is the most important concept for the public\-sector data scientist, defined in section 3\.1\.1 as a model that can 1\) predict accurately on new data and 2\) predict accurately across different group contexts, like neighborhoods. In this section, the first definition is explored.
Note that R^2 judges model accuracy on the data used to train the model. Like the age prediction robot, this is not an impressive feat. A higher bar is set below by randomly splitting `boston.sf` into `boston.training` and `boston.test` datasets, *training* a model on the former and *testing* on the latter. New goodness of fit indicators are introduced, and we will see how validation on new data is a better way to gauge accuracy and generalizability. This ultimately helps dictate how useful a model is for decision\-making.
### 3\.4\.1 Accuracy \- Mean Absolute Error
Below, the `createDataPartition` function randomly splits `boston.sf` into a 60% `boston.training` dataset and a 40% `boston.test` dataset.[27](#fn27) Note that the air conditioning feature, `R_AC`, contains three possible categories, and `D` appears just once in the data (`table(boston.sf$R_AC)`). The `D` observations must be moved into the training set or removed altogether; otherwise, no \\(\\beta\\) coefficient would be estimated and the model would fail to `predict`. The `y` parameter stratifies the split on three such categorical features (`NUM_FLOORS.cat`, `Style`, and `R_AC`), balancing their levels across the training and test sets.
Another model is then estimated on the training set, `reg.training`.
```
inTrain <- createDataPartition(
y = paste(boston.sf$NUM_FLOORS.cat, boston.sf$Style, boston.sf$R_AC),
p = .60, list = FALSE)
boston.training <- boston.sf[inTrain,]
boston.test <- boston.sf[-inTrain,]
reg.training <- lm(SalePrice ~ ., data = st_drop_geometry(boston.training) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE,
crimes.Buffer))
```
Four new fields are created in `boston.test`. `SalePrice.Predict` is the sale price prediction calculated by using `reg.training` to `predict` onto `boston.test`. `SalePrice.Error` and `SalePrice.AbsError` calculate the difference between predicted and observed prices, the latter in absolute value (`abs`), which is suitable when the direction of over or under\-prediction is less of a concern. `SalePrice.APE` is the 'Absolute Percent Error' \- the difference between predicted and observed prices on a percentage basis. Any sale with a price greater than $5 million is removed from `boston.test`.
Keep in mind, these statistics reflect how well the model predicts for data it has never seen before. Relative to R^2, which tests goodness of fit on the training data, this is a more reliable validation approach.
```
boston.test <-
boston.test %>%
mutate(SalePrice.Predict = predict(reg.training, boston.test),
SalePrice.Error = SalePrice.Predict - SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict - SalePrice),
SalePrice.APE = (abs(SalePrice.Predict - SalePrice)) / SalePrice.Predict)%>%
filter(SalePrice < 5000000)
```
Now that measures of error are attributed to each sale, some basic summary statistics describe goodness of fit. First, Mean Absolute Error (MAE) is calculated. The error is not trivial given the mean `SalePrice` for `boston.test` is $619,070\.
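For reference, MAE is simply the mean of the absolute errors attributed above; in the chapter's notation, where \\(\\widehat{SalePrice}\_{i}\\) denotes the prediction for sale *i*:

\\[MAE \= \\frac{1}{n}\\sum\_{i}\\lvert SalePrice\_{i} \- \\widehat{SalePrice}\_{i}\\rvert\\]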
```
mean(boston.test$SalePrice.AbsError, na.rm = T)
```
```
## [1] 176536
```
Next, Mean Absolute Percent Error is calculated by taking the mean `SalePrice.APE`. The ‘MAPE’ confirms our suspicion, suggesting the model errs by 34%.
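Written out, and matching how `SalePrice.APE` is computed above (note that the denominator is the *predicted* rather than the observed price):

\\[MAPE \= \\frac{1}{n}\\sum\_{i}\\frac{\\lvert SalePrice\_{i} \- \\widehat{SalePrice}\_{i}\\rvert}{\\widehat{SalePrice}\_{i}}\\]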
```
mean(boston.test$SalePrice.APE, na.rm = T)
```
```
## [1] 0.3364525
```
Data visualizations are also useful for diagnosing models. The `geom_histogram` in Figure 3\.11 reveals some very high, outlying errors. In this plot, `scale_x_continuous` ensures x\-axis labels at $100k intervals.
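The plotting code is not shown here, but a minimal version of such a histogram might look like the sketch below; the bin count, fill color, and titles are guesses rather than the book's exact specification.

```
ggplot(boston.test, aes(SalePrice.AbsError)) +
  geom_histogram(bins = 50, fill = "#FA7800") +
  scale_x_continuous(breaks = seq(0, max(boston.test$SalePrice.AbsError, na.rm = TRUE),
                                  by = 100000),
                     labels = scales::dollar) +
  labs(title = "Distribution of absolute prediction errors",
       x = "Absolute Error", y = "Count")
```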
Perhaps the most useful visualization is the leftmost panel of Figure 3\.12, which plots `SalePrice` as a function of `SalePrice.Predict`. The orange line represents a perfect fit and the green line represents the average predicted fit. If the model were perfectly fit, the green and orange lines would overlap. The deviation suggests that across the range of prices, model predictions are slightly higher than observed prices, on average.
That is not the entire story, however. The rightmost panel in Figure 3\.12 is the same as the left, but divides prices into three groups to show that the extent of over\-prediction is much higher for lower\-priced sales. A good machine learner will use diagnostic plots like these to understand what additional features may be helpful for improving the model. Here the lesson is that more features are needed to account for lower prices.
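The left panel of Figure 3\.12 can be approximated with a sketch like the one below, where the orange line is the perfect fit and the green line is the average predicted fit; the book's own plotting code likely differs in styling.

```
ggplot(boston.test, aes(SalePrice.Predict, SalePrice)) +
  geom_point(size = .5) +
  geom_abline(intercept = 0, slope = 1, colour = "#FA7800") +    # perfect fit
  geom_smooth(method = "lm", se = FALSE, colour = "#25CB10") +   # average predicted fit
  labs(title = "Observed sale price as a function of predicted price",
       x = "Predicted Sale Price", y = "Observed Sale Price")
```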
### 3\.4\.2 Generalizability \- Cross\-validation
Predicting for a single hold out test set is a good way to gauge performance on new data, but testing on many holdouts is even better. Enter cross\-validation.
Cross\-validation ensures that the goodness of fit results for a single hold out is not a fluke. While there are many forms of cross\-validation, Figure 3\.13 visualizes an algorithm called ‘k\-fold’ cross\-validation, which works as such:
1. Partition the `boston.sf` data frame into k equal sized subsets (also known as “folds”).
2. For a given fold, train the model on the observations in the remaining *k*\-1 folds, predict on the held\-out fold, and measure goodness of fit.
3. Average goodness of fit across all *k* folds.
The `caret` package and its `train` function are used for cross\-validation. Below, a parameter called `fitControl` is set to specify the number of k\-fold partitions \- in this case 100\. In the code below, `set.seed` ensures reproducible folds. An object, `reg.cv`, is estimated using the same regression specification as `reg.training`.
```
fitControl <- trainControl(method = "cv", number = 100)
set.seed(825)
reg.cv <-
train(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice,
LivingArea, Style, GROSS_AREA,
NUM_FLOORS.cat, R_BDRMS, R_FULL_BTH,
R_HALF_BTH, R_KITCH, R_AC,
R_FPLACE, crimes.Buffer),
method = "lm", trControl = fitControl, na.action = na.pass)
reg.cv
```
```
## Linear Regression
##
## 1485 samples
## 11 predictor
##
## No pre-processing
## Resampling: Cross-Validated (100 fold)
## Summary of sample sizes: 1471, 1469, 1470, 1471, 1471, 1469, ...
## Resampling results:
##
## RMSE Rsquared MAE
## 272949.7 0.4866642 181828.1
##
## Tuning parameter 'intercept' was held constant at a value of TRUE
```
The cross\-validation output provides very important goodness of fit information. The value of each metric reported is actually the *mean* value across *all* folds. The `train` function returns many objects (`names(reg.cv)`), one of which is `resample`, which provides goodness of fit for each of the 100 folds. Below, the first 5 are output.
```
reg.cv$resample[1:5,]
```
```
## RMSE Rsquared MAE Resample
## 1 183082.9 0.4978145 138100.8 Fold001
## 2 580261.8 0.9449456 299042.0 Fold002
## 3 314298.3 0.1778472 217442.4 Fold003
## 4 441588.1 0.7750248 250324.4 Fold004
## 5 193053.7 0.4765171 138188.8 Fold005
```
`mean(reg.cv$resample[,3])` returns the mean for all 100 `MAE` observations, which should be exactly the same as the average MAE shown above.
If the model is generalizable to new data, we should expect comparable goodness of fit metrics across each fold. There are two ways to check whether this is true. The first is simply to take the standard deviation, `sd`, of the `MAE` across all folds; a standard deviation of $74,391 suggests significant variation across folds. This variation can also be visualized with a histogram of across\-fold `MAE`. If the model generalized well, the distribution of errors would cluster tightly together. Instead, this range of errors suggests the model predicts inconsistently and would likely be unreliable for predicting houses that have not recently sold. This is an important connection for readers to make.
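Both checks can be run directly on the `resample` table returned by `train`; a short sketch, with illustrative histogram styling:

```
# Standard deviation of MAE across the 100 folds
sd(reg.cv$resample$MAE)

# Distribution of across-fold MAE
ggplot(reg.cv$resample, aes(MAE)) +
  geom_histogram(bins = 50, fill = "#FA7800") +
  scale_x_continuous(labels = scales::dollar) +
  labs(title = "Distribution of MAE across folds",
       x = "Mean Absolute Error", y = "Count")
```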
One reason the model may not generalize to new data is that it is simply not powerful enough, as indicated by the high MAE. Figure 3\.15 below maps sale prices and absolute errors for `boston.test`. What do you notice about the errors? Do they look like random noise or are they systematically distributed across space? What is the spatial process behind these errors? These questions will be explored in the next chapter.
3\.5 Conclusion \- Our first model
----------------------------------
In this chapter, geospatial machine learning was introduced by way of home price prediction, an important use case for local governments that use prediction to assess property taxes. The goal of geospatial machine learning is to ‘borrow the experience’ of places where data exists (the training set) and test whether that experience generalizes to new places (the test set). Accuracy and generalizability were introduced as two critical themes of prediction, and we will return to these frequently in the coming chapters.
If this chapter is your first exposure to regression, then I think it would be helpful to really understand key concepts before moving forward. Why is feature engineering so important? Why is identifying colinearity important? What does it mean for regression errors to be random noise? How does cross\-validation help to understand generalizability?
Keep in mind that the three components of home prices that need to be modeled are internal/parcel characteristics, public services/amenities and the spatial process of prices. Omitting the spatial process led to errors that are clearly non\-random across space. Chapter 4 will teach us how to account for this missing variation.
3\.6 Assignment \- Predict house prices
---------------------------------------
When I teach this module to my students, the homework is a three\-week\-long home price predictive modeling competition. I give the students a training set of prices and tax parcel ids in a city, keeping a subset of prices hidden. Students then work in pairs to wrangle these data with other open datasets and build models with minimal errors.
Cash prizes are awarded to the two best\-performing teams, and a third prize is awarded for data visualization and R Markdown presentation. Unless you are taking my class, I have no cash for you (sorry to say), but that should not stop you from replicating this analysis on any number of open home sale datasets across the country.
Your focus should not only be on developing an accurate and generalizable model but also on presenting the work flow for a non\-technical decision\-maker.
reg1 <- lm(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, R_TOTAL_RM, NUM_FLOORS,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE))
summary(reg1)
```
**Table 3\.2 Regression 1**
| | |
| | SalePrice |
| | |
| LivingArea | 609\.346\*\*\* (48\.076\) |
| StyleCape | \-140,506\.900\* (79,096\.260\) |
| StyleColonial | \-343,096\.200\*\*\* (78,518\.600\) |
| StyleConventional | \-261,936\.800\*\*\* (84,123\.070\) |
| StyleDecker | \-365,755\.900\*\*\* (102,531\.300\) |
| StyleDuplex | \-183,868\.500 (128,816\.900\) |
| StyleRaised Ranch | \-390,167\.100\*\*\* (109,706\.600\) |
| StyleRanch | \-92,823\.330 (95,704\.750\) |
| StyleRow End | \-68,636\.710 (98,864\.510\) |
| StyleRow Middle | 172,722\.600\* (100,981\.400\) |
| StyleSemi?Det | \-274,146\.000\*\*\* (96,970\.880\) |
| StyleSplit Level | \-232,288\.100 (168,146\.100\) |
| StyleTri?Level | \-803,632\.100\*\* (408,127\.000\) |
| StyleTudor | \-394,103\.100 (408,553\.700\) |
| StyleTwo Fam Stack | \-147,538\.200\* (84,835\.410\) |
| StyleUnknown | \-656,090\.500\*\* (291,530\.300\) |
| StyleVictorian | \-507,379\.700\*\*\* (130,751\.100\) |
| GROSS\_AREA | \-206\.257\*\*\* (29\.108\) |
| R\_TOTAL\_RM | \-19,589\.190\*\* (8,268\.468\) |
| NUM\_FLOORS | 163,990\.700\*\*\* (38,373\.070\) |
| R\_BDRMS | \-33,713\.420\*\*\* (11,174\.750\) |
| R\_FULL\_BTH | 179,093\.600\*\*\* (23,072\.960\) |
| R\_HALF\_BTH | 85,186\.150\*\*\* (22,298\.990\) |
| R\_KITCH | \-257,206\.200\*\*\* (33,090\.900\) |
| R\_ACD | \-203,205\.700 (401,281\.500\) |
| R\_ACN | \-108,018\.900\*\*\* (35,149\.110\) |
| R\_ACU | 487,882\.600\*\*\* (127,385\.500\) |
| R\_FPLACE | 172,366\.200\*\*\* (16,240\.410\) |
| Constant | 294,677\.700\*\*\* (90,767\.260\) |
| N | 1,485 |
| R2 | 0\.571 |
| Adjusted R2 | 0\.563 |
| Residual Std. Error | 399,781\.200 (df \= 1456\) |
| F Statistic | 69\.260\*\*\* (df \= 28; 1456\) |
| | |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | |
These additional features now explain 56% of the variation in price, a significant improvement over `livingReg`. Many more coefficients are now estimated, including the architectural `Style` feature, which R automatically converts to many categorical features. These ‘dummy variables’ or ‘fixed effects’ as they are called, hypothesize a statistically significant *difference* in price across each `Style` category relative to a *reference* category \- Bungalow (`levels(boston.sf$Style)`). More intuition for fixed effects is provided in the next chapter.
### 3\.3\.2 More feature engineering \& colinearity
Before going forward, this section further emphasizes two critical concepts, feature engineering and colinearity. Beginning with feature engineering, `reg1` estimates the effect of `NUM_FLOORS` encoded as a continuous feature. `table(boston$NUM_FLOORS)` shows that houses have between 0 and 5 floors, and some have half floors. What happens when this feature is re\-engineered as categorical? The below `mutate` does just this with `case_when`, a more nuanced `ifelse`.
```
boston.sf <-
boston.sf %>%
mutate(NUM_FLOORS.cat = case_when(
NUM_FLOORS >= 0 & NUM_FLOORS < 3 ~ "Up to 2 Floors",
NUM_FLOORS >= 3 & NUM_FLOORS < 4 ~ "3 Floors",
    NUM_FLOORS >= 4 ~ "4+ Floors"))  # ">= 4" so homes with exactly 4 floors fall in this bin
```
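A quick tabulation (a sketch, not part of the original text) confirms how sales are distributed across the new categories and flags any values that fall outside the specified ranges as `NA`:

```
table(boston.sf$NUM_FLOORS.cat, useNA = "ifany")
```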
The newly recoded `NUM_FLOORS.cat` feature is input into `reg2` below. In the results (not pictured), R automatically removed `3 Floors` as the reference. The remaining two groups are statistically significant. More importantly, judging from the increase in Adjusted R^2, this marginal amount of feature engineering added significant predictive power to the model.
```
reg2 <- lm(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, R_TOTAL_RM, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH, R_KITCH,
R_AC, R_FPLACE))
```
Now more on colinearity. Figure 3\.10 indicates a strong (and obvious) correlation between the total number of rooms, `R_TOTAL_RM`, and the number of bedrooms, `R_BDRMS`. In `reg2`, `R_BDRMS` is significant, but `R_TOTAL_RM` is not. These two features are ‘colinear’, or correlated with one another, so when both are input into the regression, one is rendered insignificant. In such an instance, retain the feature that leads to a more accurate and generalizable model.
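The pairwise correlation behind Figure 3\.10 can also be checked directly (a short sketch; the exact value is shown in the figure):

```
# correlation matrix for the two colinear room-count features
cor(st_drop_geometry(boston.sf)[, c("R_TOTAL_RM", "R_BDRMS")], use = "complete.obs")
```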
As the crime features are also colinear, only one should be entered in the model. Try iteratively adding each of the crime features to the `reg2` specification. `crimes.Buffer` seems to have the greatest predictive impact. A model (not pictured) suggests that conditional on the other features, an additional crime within the 1/8th mile buffer is associated with an average price decrease of $4,390\.82\.
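As an example, one such model might add `crimes.Buffer` to the `reg2` specification (a sketch only; `reg3` is a hypothetical name, and the $4,390\.82 figure quoted above comes from the author's own run):

```
reg3 <- lm(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
             dplyr::select(SalePrice, LivingArea, Style,
                           GROSS_AREA, R_TOTAL_RM, NUM_FLOORS.cat,
                           R_BDRMS, R_FULL_BTH, R_HALF_BTH, R_KITCH,
                           R_AC, R_FPLACE, crimes.Buffer))
summary(reg3)
```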
Although the correlation plots in Figure 3\.7 suggested little correlation between crime and price, once other variables are controlled for in a multivariate regression, crime becomes significant.
Thus far, our most accurate model (with `crimes.Buffer`) explains just 64% of the variation in `SalePrice`. In Chapter 4, new geospatial features are added to increase the model’s predictive power. In the final section of this chapter, however, cross\-validation and a new set of goodness of fit metrics are introduced.
3\.4 Cross\-validation \& return to goodness of fit
---------------------------------------------------
Generalizability is the most important concept for the public\-sector data scientist, defined in section 3\.1\.1 as a model that can 1\) predict accurately on new data and 2\) predict accurately across different group contexts, like neighborhoods. In this section, the first definition is explored.
Note that R^2 judges model accuracy on the data used to train the model. Like the age prediction robot, this is not an impressive feat. A higher bar is set below by randomly splitting `boston.sf` into `boston.training` and `boston.test` datasets, *training* a model on the former and *testing* on the latter. New goodness of fit indicators are introduced, and we will see how validation on new data is a better way to gauge accuracy and generalizability. This ultimately helps dictate how useful a model is for decision\-making.
### 3\.4\.1 Accuracy \- Mean Absolute Error
Below, the `createDataPartition` function randomly splits `boston.sf` into a 60% `boston.training` dataset and a 40% `boston.test` dataset.[27](#fn27) Note that the air conditioning feature, `R_AC`, contains three possible categories, and `D` appears just once in the data (`table(boston.sf$R_AC)`). The `D` observation must be moved into the training set or removed altogether; otherwise, no \\(\\beta\\) coefficient could be estimated for it, and the model would fail to `predict`. The `y` parameter stratifies the split on three categorical features so that rare factor levels like this one are balanced across the training and test sets.
Another model is then estimated on the training set, `reg.training`.
```
inTrain <- createDataPartition(
y = paste(boston.sf$NUM_FLOORS.cat, boston.sf$Style, boston.sf$R_AC),
p = .60, list = FALSE)
boston.training <- boston.sf[inTrain,]
boston.test <- boston.sf[-inTrain,]
reg.training <- lm(SalePrice ~ ., data = st_drop_geometry(boston.training) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE,
crimes.Buffer))
```
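A quick check (a sketch, not part of the original text) confirms the rough 60/40 split and that the rare `R_AC` category landed where a coefficient can be estimated for it:

```
nrow(boston.training); nrow(boston.test)   # roughly 60% / 40% of the 1,485 sales
table(boston.training$R_AC)                # the single 'D' sale should appear here
```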
Three new fields are created in `boston.test`. `SalePrice.Predict` is the sale price prediction calculated by using `reg.training` to `predict` onto `boston.test`. `SalePrice.Error` and `SalePrice.AbsError` calculate the difference between predicted and observed prices, the latter in absolute value (`abs`), which is suitable when the direction of over\- or under\-prediction is less of a concern. `SalePrice.APE` is the ‘Absolute Percent Error’ \- the absolute difference between predicted and observed prices on a percentage basis. Any sale with a price greater than $5 million is removed from `boston.test`.
Keep in mind, these statistics reflect how well the model predicts for data it has never seen before. Relative to R^2, which tests goodness of fit on the training data, this is a more reliable validation approach.
```
boston.test <-
boston.test %>%
mutate(SalePrice.Predict = predict(reg.training, boston.test),
SalePrice.Error = SalePrice.Predict - SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict - SalePrice),
SalePrice.APE = (abs(SalePrice.Predict - SalePrice)) / SalePrice.Predict)%>%
filter(SalePrice < 5000000)
```
Now that measures of error are attributed to each sale, some basic summary statistics describe goodness of fit. First, Mean Absolute Error (MAE) is calculated. The error is not trivial given the mean `SalePrice` for `boston.test` is $619,070\.
```
mean(boston.test$SalePrice.AbsError, na.rm = T)
```
```
## [1] 176536
```
Next, Mean Absolute Percent Error is calculated by taking the mean `SalePrice.APE`. The ‘MAPE’ confirms our suspicion, suggesting the model errs by 34%.
```
mean(boston.test$SalePrice.APE, na.rm = T)
```
```
## [1] 0.3364525
```
Data visualizations are also useful for diagnosing models. The `geom_histogram` in Figure 3\.11 reveals some very high, outlying errors. In this plot, `scale_x_continuous` ensures x\-axis labels at $100k intervals.
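A minimal sketch of the kind of histogram described (assuming `ggplot2`, loaded with `tidyverse`, and the `scales` package; the binwidth and breaks are illustrative, not the book's exact figure):

```
ggplot(boston.test, aes(SalePrice.AbsError)) +
  geom_histogram(binwidth = 100000) +
  scale_x_continuous(labels = scales::dollar,
                     breaks = seq(0, 1000000, by = 100000)) +
  labs(title = "Distribution of absolute sale price errors",
       x = "Absolute error ($)", y = "Count of test set sales")
```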
Perhaps the most useful visualization is the leftmost of Figure 3\.12 which plots `SalePrice` as a function of `SalePrice.Predict`. The orange line represents a perfect fit and the green line represents the average predicted fit. If the model were perfectly fit, the green and orange lines would overlap. The deviation suggests that across the range of prices, model predictions are slightly higher than observed prices, on average.
That is not the entire story, however. The rightmost panel in Figure 3\.12 is the same as the left, but divides prices into three groups to show that the extent of over\-prediction is much higher for lower\-priced sales. A good machine learner will use diagnostic plots like these to understand what additional features may be helpful for improving the model. Here the lesson is that more features are needed to account for lower prices.
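A sketch of the left panel of Figure 3\.12, following the plotting conventions used later in the book (the colors and exact aesthetics are illustrative):

```
ggplot(boston.test, aes(SalePrice.Predict, SalePrice)) +
  geom_point() +
  geom_abline(intercept = 0, slope = 1, colour = "#FA7800") +   # orange: perfect fit
  geom_smooth(method = "lm", se = FALSE, colour = "#25CB10") +  # green: average predicted fit
  labs(title = "Observed sale price as a function of predicted price",
       x = "Predicted sale price ($)", y = "Observed sale price ($)")
```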
### 3\.4\.2 Generalizability \- Cross\-validation
Predicting for a single hold out test set is a good way to gauge performance on new data, but testing on many holdouts is even better. Enter cross\-validation.
Cross\-validation ensures that the goodness of fit results for a single hold out is not a fluke. While there are many forms of cross\-validation, Figure 3\.13 visualizes an algorithm called ‘k\-fold’ cross\-validation, which works as such:
1. Partition the `boston.sf` data frame into k equal sized subsets (also known as “folds”).
2. For a given fold, train on the observations *not* in that fold, predict for the held\-out fold, and measure goodness of fit.
3. Average goodness of fit across all *k* folds.
The `caret` package and its `train` function are used for cross\-validating. Below, a parameter called `fitControl` is set to specify the number of k\-fold partitions \- in this case 100\. In the code below, `set.seed` ensures reproducible folds. An object, `reg.cv`, is estimated using the same regression specified in `reg.training`.
```
fitControl <- trainControl(method = "cv", number = 100)
set.seed(825)
reg.cv <-
train(SalePrice ~ ., data = st_drop_geometry(boston.sf) %>%
dplyr::select(SalePrice,
LivingArea, Style, GROSS_AREA,
NUM_FLOORS.cat, R_BDRMS, R_FULL_BTH,
R_HALF_BTH, R_KITCH, R_AC,
R_FPLACE, crimes.Buffer),
method = "lm", trControl = fitControl, na.action = na.pass)
reg.cv
```
```
## Linear Regression
##
## 1485 samples
## 11 predictor
##
## No pre-processing
## Resampling: Cross-Validated (100 fold)
## Summary of sample sizes: 1471, 1469, 1470, 1471, 1471, 1469, ...
## Resampling results:
##
## RMSE Rsquared MAE
## 272949.7 0.4866642 181828.1
##
## Tuning parameter 'intercept' was held constant at a value of TRUE
```
The cross\-validation output provides very important goodness of fit information. The value of each metric is actually the *mean* value across *all* folds. The `train` function returns many objects (`names(reg.cv)`), one of which is `resample`, which provides goodness of fit for each of the 100 folds. Below, the first 5 are output.
```
reg.cv$resample[1:5,]
```
```
## RMSE Rsquared MAE Resample
## 1 183082.9 0.4978145 138100.8 Fold001
## 2 580261.8 0.9449456 299042.0 Fold002
## 3 314298.3 0.1778472 217442.4 Fold003
## 4 441588.1 0.7750248 250324.4 Fold004
## 5 193053.7 0.4765171 138188.8 Fold005
```
`mean(reg.cv$resample[,3])` returns the mean for all 100 `MAE` observations, which should be exactly the same as the average MAE shown above.
If the model is generalizable to new data, we should expect comparable goodness of fit metrics across each fold. There are two ways to see if this is true. The first is simply by taking the standard deviation, `sd`, of the `MAE` across all folds. $74,391 suggests significant variation across folds. This variation can also be visualized with a histogram of across\-fold `MAE`. If the model generalized well, the distribution of errors would cluster tightly together. Instead, this range of errors suggests the model predicts inconsistently, and would likely be unreliable for predicting houses that have not recently sold. This is an important connection for readers to make.
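Both checks take only a couple of lines (a sketch; the standard deviation quoted above comes from the author's folds and will differ slightly under a different seed):

```
sd(reg.cv$resample$MAE)   # spread of MAE across the 100 folds

# histogram of across-fold MAE
ggplot(reg.cv$resample, aes(MAE)) +
  geom_histogram(binwidth = 25000) +
  labs(title = "Distribution of MAE across 100 folds", x = "MAE ($)", y = "Count of folds")
```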
One reason the model may not generalize to new data is that it is simply not powerful enough, as indicated by the high MAE. Figure 3\.15 below maps sale prices and absolute errors for `boston.test`. What do you notice about the errors? Do they look like random noise or are they systematically distributed across space? What is the spatial process behind these errors? These questions will be explored in the next chapter.
3\.5 Conclusion \- Our first model
----------------------------------
In this chapter, geospatial machine learning was introduced by way of home price prediction, an important use case for local governments that use prediction to assess property taxes. The goal of geospatial machine learning is to ‘borrow the experience’ of places where data exists (the training set) and test whether that experience generalizes to new places (the test set). Accuracy and generalizability were introduced as two critical themes of prediction, and we will return to these frequently in the coming chapters.
If this chapter is your first exposure to regression, then I think it would be helpful to really understand the key concepts before moving forward. Why is feature engineering so important? Why is identifying colinearity important? What does it mean for regression errors to be random noise? How does cross\-validation help to understand generalizability?
Keep in mind that the three components of home prices that need to be modeled are internal/parcel characteristics, public services/amenities, and the spatial process of prices. Omitting the spatial process led to errors that are clearly non\-random across space. Chapter 4 will teach us how to account for this missing variation.
3\.6 Assignment \- Predict house prices
---------------------------------------
When I teach this module to my students, the homework is a three\-week long home price predictive modeling competition. I give the students a training set of prices and tax parcel ids in a city, keeping a subset of prices hidden. Students then work in pairs to wrangle these data with other open datasets, and build models with minimal errors.
Cash prizes are awarded to the two best\-performing teams, and a third prize is given for data visualization and R Markdown presentation. Unless you are taking my class, I have no cash for you (sorry to say), but that should not stop you from replicating this analysis on any number of open home sale datasets across the country.
Your focus should not only be on developing an accurate and generalizable model, but also on presenting the workflow for a non\-technical decision\-maker.
Chapter 4 Intro to geospatial machine learning, Part 2
======================================================
4\.1 On the spatial process of home prices
------------------------------------------
Recall the three components of the hedonic home price model \- internal characteristics, like the number of bedrooms; neighborhood amenities/public services, like crime exposure; and the underlying spatial process of prices. Modeling the first two in the previous chapter still left nearly one third of the variation in price unexplained.
In this chapter, the spatial process is added and we learn why generalizability across space is so important for geospatial machine learning. Let’s start with the relevant spatial process in home prices.
To me, the most interesting housing market dynamic is that from individual real estate transactions emerges a systematic spatial pattern of house prices. The topmost map in Figure 4\.1 illustrates this pattern. As discussed in the Introduction, these patterns result from both external and internal decision\-making.
External decision\-makers, like Planners, enact zoning regulations dictating what can be built, where, while internal decision\-makers, like home buyers, bid on locations according to their preferences. Both preferences and zoning can be accounted for in the first two components of the hedonic model \- so what is left unexplained in the error term?
First, imagine that all nearby houses have a backyard pool valued at $10k. If the pool was the only component left unaccounted for in the model, then each nearby house should exhibit regression errors (of $10k) that cluster in space.
Second, homes are ‘appraised’ by looking at ‘comparable’ houses nearby. This means that these nearby comparable houses provide a ‘price signal’, and if that signal is left unaccounted for, regression errors will also cluster in space.[28](#fn28)
The key to engineering features that account for this spatial process is understanding the *spatial scale* of comparable houses, nearby. This is challenging because, as Figure 4\.1 illustrates, prices in Boston exhibit clustering at different spatial scales \- both within *and* across neighborhoods.
The goal in this chapter is to engineer features that account for neighborhood\-scale clustering. We will test for generalizability of model predictions with and without these features, and conclude by considering the implications of deploying a property tax assessment algorithm that does not generalize across space.
In the next section, clustering of home price and regression errors is explored, followed by the creation of a ‘neighborhood fixed effect’ feature.
### 4\.1\.1 Setup \& Data Wrangling
In this section, libraries are loaded; the final Chapter 3 dataset, including crime features, is read in and split into training and test sets. The `reg.training` model is estimated again, and goodness of fit metrics are calculated on the `boston.test` set.
```
library(tidyverse)
library(sf)
library(spdep)
library(caret)
library(ckanr)
library(grid)
library(gridExtra)
library(knitr)
library(kableExtra)
library(tidycensus)
library(scales)
palette5 <- c("#25CB10", "#5AB60C", "#8FA108", "#C48C04", "#FA7800")
```
The data and functions are loaded.
```
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
boston.sf <- st_read(file.path(root.dir,"/Chapter3_4/boston_sf_Ch1_wrangled.geojson")) %>%
st_set_crs('ESRI:102286')
nhoods <-
st_read("http://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson") %>%
st_transform('ESRI:102286')
```
Here the data is split into training and test sets, modeled, and predictions are estimated for a `Baseline Regression`.
```
inTrain <- createDataPartition(
y = paste(boston.sf$Name, boston.sf$NUM_FLOORS.cat,
boston.sf$Style, boston.sf$R_AC),
p = .60, list = FALSE)
boston.training <- boston.sf[inTrain,]
boston.test <- boston.sf[-inTrain,]
reg.training <-
lm(SalePrice ~ ., data = as.data.frame(boston.training) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE, crimes.Buffer))
boston.test <-
boston.test %>%
mutate(Regression = "Baseline Regression",
SalePrice.Predict = predict(reg.training, boston.test),
SalePrice.Error = SalePrice.Predict - SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict - SalePrice),
SalePrice.APE = (abs(SalePrice.Predict - SalePrice)) / SalePrice.Predict)%>%
filter(SalePrice < 5000000)
```
4\.2 Do prices \& errors cluster? The Spatial Lag
-------------------------------------------------
So we now understand that even if a regression perfectly accounts for internal characteristics and public services/amenities, it may still have errors that cluster in space. The best way to test for this is to simply map the errors, as in Figure 4\.1\.
Clustering is also known as spatial autocorrelation \- the idea that nearer things are more related than farther things. Let’s consider a spatial autocorrelation test that correlates home prices with nearby home prices.
To do so, the code block below introduces new feature engineering that calculates for each home sale, the *average* sale price of its *k* nearest neighbors. In spatial analysis parlance, this is the ‘spatial lag’, and it is measured using a series of functions from the `spdep` package.
First, a matrix of point coordinates, `coords`, is created by taking the `st_coordinates` of `boston.sf`. The `knn2nb` function creates a `neighborList`, coding for each point its `5` nearest neighbors. Next, a ‘spatial weights matrix’ is created, formally relating each sale price observation to those in the `neighborList`. Finally, the `lag.listw` function calculates a spatial lag of price, `boston.sf$lagPrice`, which is the average price of a home sale’s 5 nearest neighbors. The leftmost panel of Figure 4\.2 above plots `SalePrice` as a function of `lagPrice`.
```
coords <- st_coordinates(boston.sf)
neighborList <- knn2nb(knearneigh(coords, 5))
spatialWeights <- nb2listw(neighborList, style="W")
boston.sf$lagPrice <- lag.listw(spatialWeights, boston.sf$SalePrice)
```
The interpretation of this plot is that as price increases, so does the price of nearby houses. The correlation for this spatial lag relationship is 0\.87 and is highly statistically significant. This is substantial evidence for clustering of home prices.
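That correlation can be computed directly from the spatial lag created above (a one\-line sketch):

```
cor(boston.sf$lagPrice, boston.sf$SalePrice, use = "complete.obs")
```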
How about for model errors? The code block below replicates the spatial lag procedure for `SalePrice.Error`, calculating the lag directly in the `mutate` of `ggplot`. The relationship is visualized in the rightmost plot above and describes a marginal but significant correlation of 0\.24\. The interpretation is that as home price errors increase, so do nearby home price errors. That model errors are spatially autocorrelated suggests that critical spatial information has been omitted from the model. Let’s now demonstrate a second approach for measuring spatial autocorrelation \- Moran’s *I*.
```
coords.test <- st_coordinates(boston.test)
neighborList.test <- knn2nb(knearneigh(coords.test, 5))
spatialWeights.test <- nb2listw(neighborList.test, style="W")
boston.test %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)) %>%
  ggplot(aes(lagPriceError, SalePrice.Error)) +
    geom_point() +                             # each test set sale: error vs. spatial lag of error
    stat_smooth(method = "lm", se = FALSE)     # linear fit summarizing the relationship
```
### 4\.2\.1 Do model errors cluster? \- Moran’s *I*
Moran’s *I* is based on a null hypothesis that a given spatial process is randomly distributed. The statistic analyzes how local means deviate from the global mean. A positive *I* near 1 describes positive spatial autocorrelation, or clustering. An *I* near \-1 suggests a spatial process where high and low prices/errors ‘repel’ one another, or are dispersed. Where positive and negative values are randomly distributed, the Moran’s *I* statistic is near 0\.
A statistically significant p\-value overturns the null hypothesis to conclude a clustered spatial process. The Moran’s *I* p\-value is estimated by comparing the observed Moran’s *I* to the *I* calculated from many random permutations of points, like so:
1. The point geometries of home sale observations remain fixed in space.
2. Values of `SalePrice.Error` are randomly assigned to observed home sale locations.
3. Moran’s *I* is calculated for that permutation.
4. The process is repeated over *n* permutations creating a distribution of permuted Moran’s *I* values, which is then sorted.
5. If the observed Moran’s *I* value is greater than say, 95% of the permuted *I* values, then conclude that the observed spatial distribution of errors exhibit greater clustering than expected by random chance alone (a p\-value of 0\.05\).
Figure 4\.3 demonstrates Moran’s *I* values for the three spatial processes. The `moran.mc` function (`mc` stands for ‘Monte Carlo’) is used for the random permutation approach. 999 random permutations of *I* are calculated plus 1 observed *I*, giving a total distribution of 1000 Moran’s *I* observations.
The `Clustered` point process yields a middling *I* of 0\.48, but a p\-value of 0\.001 suggests that the observed point process is more clustered than all 999 random permutations (1 / 1,000 \= 0\.001\) and is statistically significant. In the `Random` case, the *I* is close to 0 at 0\.0189, and the p\-value is insignificant at 0\.363\. Here, about a third of the random permutations had higher Moran’s *I* than the observed configuration. Finally, the `Dispersed` case has an *I* of \-1, suggesting perfect dispersion of values. The p\-value is 0\.999, with all random permutations having a Moran’s *I* greater than the configuration pictured (999/1000 \= 0\.999\).
The `moran.mc` function below calculates Moran’s *I* for the `SalePrice.Error` in `boston.test`. Note the inclusion of the `spatialWeights.test` built above. The observed Moran’s *I* of 0\.13 seems marginal, but the p\-value of 0\.001 suggests model errors cluster more than what we might expect due to random chance alone.
In Figure 4\.4, the frequency of all 999 randomly permuted *I* are plotted as a histogram, with the observed *I* indicated by the orange line. That the observed *I* is higher than all 999 randomly generated *I*’s provides visual confirmation of spatial autocorrelation. Both the spatial lag and Moran’s *I* tests show that model errors exhibit spatial autocorrelation. Again, this suggests that variation in price likely related to the spatial process has been omitted from our current model.
In the next section, new geospatial features are created to account for some of this spatial variation.
```
moranTest <- moran.mc(boston.test$SalePrice.Error,
spatialWeights.test, nsim = 999)
ggplot(as.data.frame(moranTest$res[c(1:999)]), aes(moranTest$res[c(1:999)])) +
geom_histogram(binwidth = 0.01) +
geom_vline(aes(xintercept = moranTest$statistic), colour = "#FA7800",size=1) +
scale_x_continuous(limits = c(-1, 1)) +
labs(title="Observed and permuted Moran's I",
subtitle= "Observed Moran's I in orange",
x="Moran's I",
y="Count") +
plotTheme()
```
4\.3 Accounting for neighborhood
--------------------------------
Now onto new geospatial features to help account for the spatial process at the neighborhood scale. To do so, a ‘neighborhood fixed effect’ feature is created.
Fixed effects were discussed in the previous chapter. These are categorical features, like architectural `Style`, that hypothesize statistically significant differences in price. `summary(reg.training)` shows that some of the `Style` fixed effects are statistically significant and help to predict price. The same hypothesis can be extended to categorical, neighborhood fixed effects.
For some intuition, the code block below regresses `SalePrice` as a function of neighborhood fixed effects, `Name`. A table is created showing that for each neighborhood, the mean `SalePrice` and the `meanPrediction` are identical. Thus, accounting for neighborhood effects accounts for the neighborhood mean of prices, and hopefully, some of the spatial process that was otherwise omitted from the model. Try to understand how `left_join` helps create the below table.
```
left_join(
st_drop_geometry(boston.test) %>%
group_by(Name) %>%
summarize(meanPrice = mean(SalePrice, na.rm = T)),
mutate(boston.test, predict.fe =
predict(lm(SalePrice ~ Name, data = boston.test),
boston.test)) %>%
st_drop_geometry %>%
group_by(Name) %>%
summarize(meanPrediction = mean(predict.fe))) %>%
kable() %>% kable_styling()
```
| Name | meanPrice | meanPrediction |
| --- | --- | --- |
| Beacon Hill | 2996000\.0 | 2996000\.0 |
| Charlestown | 904948\.6 | 904948\.6 |
| Dorchester | 524021\.5 | 524021\.5 |
| East Boston | 506665\.8 | 506665\.8 |
| Hyde Park | 413728\.0 | 413728\.0 |
| Jamaica Plain | 894167\.9 | 894167\.9 |
| Mattapan | 411423\.7 | 411423\.7 |
| Mission Hill | 1442000\.0 | 1442000\.0 |
| Roslindale | 481714\.3 | 481714\.3 |
| Roxbury | 536004\.8 | 536004\.8 |
| South Boston | 740452\.7 | 740452\.7 |
| South End | 3548250\.0 | 3548250\.0 |
| West Roxbury | 534254\.5 | 534254\.5 |
**Table 4\.1**
Let’s see how much new predictive power these neighborhood effects generate by re\-estimating the regression with the neighborhood `Name` feature. Note that `Name` was included in the `createDataPartition` function above to ensure that neighborhoods with few sale price observations are moved into the training set.
The code block below estimates `reg.nhood` and creates a data frame, `boston.test.nhood`, with all goodness of fit metrics.
```
reg.nhood <- lm(SalePrice ~ ., data = as.data.frame(boston.training) %>%
dplyr::select(Name, SalePrice, LivingArea,
Style, GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE,crimes.Buffer))
boston.test.nhood <-
boston.test %>%
mutate(Regression = "Neighborhood Effects",
SalePrice.Predict = predict(reg.nhood, boston.test),
SalePrice.Error = SalePrice.Predict- SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict- SalePrice),
SalePrice.APE = (abs(SalePrice.Predict- SalePrice)) / SalePrice)%>%
filter(SalePrice < 5000000)
```
### 4\.3\.1 Accuracy of the neighborhood model
How much do the neighborhood fixed effects improve the model relative to the `Baseline Regression`? `summary(reg.nhood)` indicates that the neighborhood effects are very significant, but have rendered the `Style` fixed effects insignificant, suggesting perhaps that architectural style and neighborhood are colinear. How might this make sense? The code block below binds error metrics from `bothRegressions`, calculating a `lagPriceError` for each.
```
bothRegressions <-
rbind(
dplyr::select(boston.test, starts_with("SalePrice"), Regression, Name) %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)),
dplyr::select(boston.test.nhood, starts_with("SalePrice"), Regression, Name) %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)))
```
First, a table is created describing the MAE and MAPE for `bothRegressions`. The `Neighborhood Effects` model is more accurate on both a dollars and percentage basis. Interestingly, the R^2 of `reg.nhood` is 0\.92, which would be very high to a social scientist. To a data scientist however, a MAPE of 18% ($109,446 at the mean price) suggests this model still needs much improvement to be used in the real world.
```
st_drop_geometry(bothRegressions) %>%
gather(Variable, Value, -Regression, -Name) %>%
filter(Variable == "SalePrice.AbsError" | Variable == "SalePrice.APE") %>%
group_by(Regression, Variable) %>%
summarize(meanValue = mean(Value, na.rm = T)) %>%
spread(Variable, meanValue) %>%
kable()
```
| Regression | SalePrice.AbsError | SalePrice.APE |
| --- | --- | --- |
| Baseline Regression | 183140\.4 | 0\.7445456 |
| Neighborhood Effects | 105697\.1 | 0\.1832663 |
**Table 4\.2**
Next, predicted prices are plotted as a function of observed prices. Recall the orange line represents a would\-be perfect fit, while the green line represents the predicted fit. The `Neighborhood Effects` model clearly fits the data better, and does so for all price levels, low, medium and high. Note, for these plots, `Regression` is the grouping variable in `facet_wrap`.
The neighborhood effects added much predictive power, perhaps by explaining part of the spatial process. As such, we should now expect less clustering or spatial autocorrelation in model errors.
```
bothRegressions %>%
dplyr::select(SalePrice.Predict, SalePrice, Regression) %>%
ggplot(aes(SalePrice, SalePrice.Predict)) +
geom_point() +
stat_smooth(aes(SalePrice, SalePrice),
method = "lm", se = FALSE, size = 1, colour="#FA7800") +
stat_smooth(aes(SalePrice.Predict, SalePrice),
method = "lm", se = FALSE, size = 1, colour="#25CB10") +
facet_wrap(~Regression) +
labs(title="Predicted sale price as a function of observed price",
subtitle="Orange line represents a perfect prediction; Green line represents prediction") +
plotTheme()
```
### 4\.3\.2 Spatial autocorrelation in the neighborhood model
Model errors that cluster in space reflect a regression that has omitted the spatial process. Given that the neighborhood fixed effects improved model accuracy, we might expect errors to be less spatially autocorrelated.
To test this hypothesis, Figure 4\.6 maps model errors across `bothRegressions`. Does it look like the errors from the `Neighborhood Effects` are more randomly distributed in space relative to the `Baseline`?
The Moran’s *I* for the `Baseline` and `Neighborhood Effects` regression errors are 0\.13 and 0\.2, respectively, and both are statistically significant.
Figure 4\.7 plots errors as a function of lagged model errors for both regressions. Note that the range of errors is smaller for the `Neighborhood Effects` regression because it is more accurate. The correlations for the `Baseline` and `Neighborhood Effects` are 0\.24 and 0\.35, respectively, and are also statistically significant.
What does this all mean? We hypothesized that neighborhood effects could help account for the otherwise missing spatial process. Doing so reduced errors by nearly half, but spatial autocorrelation remained. Why?
For one, the neighborhood fixed effect may be controlling for omitted public services/amenity features like schools, access to transit, etc. Second, these features account for the spatial process *across* neighborhoods. Is it possible that other spatial processes exist *within* neighborhoods? Can you think of some features that might help account for the spatial process at even smaller spatial scales? We return to this question in Chapter 5\.
### 4\.3\.3 Generalizability of the neighborhood model
Recall the two definitions of generalizability: First, the ability to predict accurately on new data \- which was the focus of cross\-validation in Chapter 3\. Second, the ability to predict with comparable accuracy across different group contexts, like neighborhoods. Here, the focus will be on the latter. It is paramount that a geospatial predictive model generalize to different neighborhood contexts. If it does not, then the algorithm may not be fair.
Generalizability across space is very challenging with OLS, because OLS focuses on relationships at the mean. A model trained on Boston will not perform well on Phoenix, for example, because the urban context (at the mean) varies so significantly. Similarly, a model trained on eight low\-income neighborhoods and used to predict for a ninth, wealthy neighborhood will also perform poorly.
Two approaches for testing across\-neighborhood generalizability are demonstrated. The first simply maps Mean Absolute Percent Errors (MAPE) by neighborhood. The second gathers Census data to test how well each model generalizes to different group contexts \- like race and income.
Figure 4\.8 below maps the mean MAPE by `nhoods` for `bothRegressions`. Not only is the `Neighborhood Effects` model more accurate in general, but its accuracy is more consistent across neighborhoods. This consistency suggests the *Neighborhood Effects* model is more generalizable. Note the `left_join` which attaches `mean.MAPE` to neighborhood geometries.
```
st_drop_geometry(bothRegressions) %>%
group_by(Regression, Name) %>%
summarize(mean.MAPE = mean(SalePrice.APE, na.rm = T)) %>%
ungroup() %>%
left_join(nhoods) %>%
st_sf() %>%
ggplot() +
geom_sf(aes(fill = mean.MAPE)) +
geom_sf(data = bothRegressions, colour = "black", size = .5) +
facet_wrap(~Regression) +
scale_fill_gradient(low = palette5[1], high = palette5[5],
name = "MAPE") +
labs(title = "Mean test set MAPE by neighborhood") +
mapTheme()
```
To test generalizability across urban contexts, `tidycensus` downloads Census data to define a `raceContext` and an `incomeContext` (don’t forget your `census_api_key`). `output = "wide"` brings in the data in wide form. Census tracts where at least 51% of residents are white receive a `Majority White` designation. Tracts with incomes greater than the citywide mean receive a `High Income` designation.
These designations are arbitrary and suffer from MAUP bias but are still useful for providing descriptive context of generalizability. Figure 4\.9 below maps the two neighborhood contexts. As these outcomes have different categorical labels, `facet_wrap` cannot be used to create a small multiple map. Instead `grid.arrange` binds both maps together over two columns (`ncol=2`).
```
tracts17 <-
get_acs(geography = "tract", variables = c("B01001_001E","B01001A_001E","B06011_001"),
year = 2017, state=25, county=025, geometry=T, output = "wide") %>%
st_transform('ESRI:102286') %>%
rename(TotalPop = B01001_001E,
NumberWhites = B01001A_001E,
Median_Income = B06011_001E) %>%
mutate(percentWhite = NumberWhites / TotalPop,
raceContext = ifelse(percentWhite > .5, "Majority White", "Majority Non-White"),
incomeContext = ifelse(Median_Income > 32322, "High Income", "Low Income"))
grid.arrange(ncol = 2,
ggplot() + geom_sf(data = na.omit(tracts17), aes(fill = raceContext)) +
scale_fill_manual(values = c("#25CB10", "#FA7800"), name="Race Context") +
labs(title = "Race Context") +
mapTheme() + theme(legend.position="bottom"),
ggplot() + geom_sf(data = na.omit(tracts17), aes(fill = incomeContext)) +
scale_fill_manual(values = c("#25CB10", "#FA7800"), name="Income Context") +
labs(title = "Income Context") +
mapTheme() + theme(legend.position="bottom"))
```
Figure 4\.9 shows that Boston is very segregated with respect to both race and income. In the code block below, MAPE is calculated across `bothRegressions` and both neighborhood contexts. There is a significant difference in goodness of fit across the races, but the gap is much smaller for the `Neighborhood Effects` model.
```
st_join(bothRegressions, tracts17) %>%
group_by(Regression, raceContext) %>%
summarize(mean.MAPE = scales::percent(mean(SalePrice.APE, na.rm = T))) %>%
st_drop_geometry() %>%
spread(raceContext, mean.MAPE) %>%
kable(caption = "Test set MAPE by neighborhood racial context")
```
| Regression | Majority Non\-White | Majority White |
| --- | --- | --- |
| Baseline Regression | 119% | 43% |
| Neighborhood Effects | 20% | 17% |
**Table 4\.3**
The same trend is evident for income, with the `Baseline Regression` exhibiting much higher error rate differences across income contexts. It is now clear how the inclusion of neighborhood effects makes the model more generalizable. What would be the consequences of deploying the `Baseline Regression` to assess property taxes?
What do you notice when calculating mean price *error* by group? `SalePrice.Error` is calculated by subtracting the observed price from the predicted price, meaning that a positive value represents an over\-prediction. If put into production, this algorithm would disproportionately over\-assess and overtax poorer, minority neighborhoods. Such an outcome would be both grossly unfair and a political liability. This ‘disparate impact’ will be the focus of several chapters to come.
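A sketch of that calculation, mirroring the MAPE table above but summarizing the signed `SalePrice.Error` instead (the exact values are not reproduced here):

```
st_join(bothRegressions, tracts17) %>%
  filter(!is.na(raceContext)) %>%
  group_by(Regression, raceContext) %>%
  summarize(mean.Error = mean(SalePrice.Error, na.rm = T)) %>%   # positive = over-prediction
  st_drop_geometry() %>%
  spread(raceContext, mean.Error) %>%
  kable(caption = "Mean test set price error by neighborhood racial context")
```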
```
st_join(bothRegressions, tracts17) %>%
filter(!is.na(incomeContext)) %>%
group_by(Regression, incomeContext) %>%
summarize(mean.MAPE = scales::percent(mean(SalePrice.APE, na.rm = T))) %>%
st_drop_geometry() %>%
spread(incomeContext, mean.MAPE) %>%
kable(caption = "Test set MAPE by neighborhood income context")
```
| Regression | High Income | Low Income |
| --- | --- | --- |
| Baseline Regression | 43% | 115% |
| Neighborhood Effects | 16% | 21% |
**Table 4\.4**
4\.4 Conclusion \- Features at multiple scales
----------------------------------------------
Chapter 3 introduced the geospatial machine learning framework, culminating in a model that did not account for the spatial process of home prices. In this chapter, new methods have been introduced for quantifying spatial autocorrelation. Neighborhood features were added to control for at least part of the spatial process.
These neighborhood ‘fixed effects’ increased the accuracy and generalizability of the model, but model errors remained clustered. What makes home prices so difficult to predict (and such a great first use case) is that prices cluster at different spatial scales. We successfully accounted for the neighborhood scale, but other spatial relationships exist at smaller scales.
Some readers will be drawn to the idea of location\-based fixed effects and seek to engineer neighborhoods that better account for within\- and across\-neighborhood price variation. This is a slippery slope. One reasonable approach may be to ‘cluster’ prices into a set of optimal neighborhoods, but quickly, one will realize that smaller neighborhoods are better. Imagine the smallest neighborhood \- a fixed effect for each house. The model would trend toward a perfect fit, but would fail if used to predict for a new property (for which no coefficient has been estimated).
This is referred to as ‘overfitting’. The models in these two chapters lack accuracy and are ‘underfit’. Conversely, a model with an R^2 of 1 or a MAPE of 0% is ‘overfit’ and would fail on new data. I have provided some tests to help, but no one statistic can conclude whether a model is under\- or overfit. Accuracy and generalizability are trade\-offs, and the best way to judge a model is by relating it to the decision\-making process.
Finally, neighborhood controls created a more generalizable model, presumably because they successfully ‘borrowed’ a diverse set of housing market ‘experiences’ across different communities (i.e., low\-income and wealthy). However, as we will learn in the next chapter, if the data gathering process itself does not generalize across space, it is unlikely the predictions generated from those data will generalize either. We will continue to see how generalizability is the most significant consideration in public\-sector predictive modeling.
4\.1 On the spatial process of home prices
------------------------------------------
Recall the three components of the hedonic home price model \- internal characteristics, like the number of bedrooms; neighborhood amenities/public services, like crime exposure; and the underlying spatial process of prices. Modeling the first two in the previous chapter still left nearly one third of the variation in price unexplained.
In this chapter, the spatial process is added and we learn why generalizability across space is so important for geospatial machine learning. Let’s start with the relevant spatial process in home prices.
To me, the most interesting housing market dynamic is that from individual real estate transactions emerges a systematic spatial pattern of house prices. The topmost map in Figure 4\.1 illustrates this pattern. As discussed in the Introduction, these patterns result from both external and internal decision\-making.
External decision\-makers, like Planners, enact zoning regulations dictating what can be built, where, while internal decision\-makers, like home buyers, bid on locations according to their preferences. Both preferences and zoning can be accounted for in the first two components of the hedonic model \- so what is left unexplained in the error term?
First, imagine that all nearby houses have a backyard pool valued at $10k. If the pool was the only component left unaccounted for in the model, then each nearby house should exhibit regression errors (of $10k) that cluster in space.
Second, homes are ‘appraised’ by looking at ‘comparable’ houses nearby. This means that these comparable houses nearby provide a ‘price signal’, and if that signal is left unaccounted for, regression errors will also cluster in space.[28](#fn28)
The key to engineering features that account for this spatial process is understanding the *spatial scale* of comparable houses nearby. This is challenging because, as Figure 4\.1 illustrates, prices in Boston exhibit clustering at different spatial scales \- both within *and* across neighborhoods.
The goal in this chapter is to engineer features that account for neighborhood\-scale clustering. We will test for generalizability of model predictions with and without these features, and conclude by considering the implications of deploying a property tax assessment algorithm that does not generalize across space.
In the next section, clustering of home price and regression errors is explored, followed by the creation of a ‘neighborhood fixed effect’ feature.
### 4\.1\.1 Setup \& Data Wrangling
In this section, libraries are loaded; the final Chapter 3 dataset, including crime features, is read in and split into training and test sets. The `reg.training` model is estimated again, and goodness of fit metrics are calculated on the `boston.test` set.
```
library(tidyverse)
library(sf)
library(spdep)
library(caret)
library(ckanr)
library(grid)
library(gridExtra)
library(knitr)
library(kableExtra)
library(tidycensus)
library(scales)
palette5 <- c("#25CB10", "#5AB60C", "#8FA108", "#C48C04", "#FA7800")
```
The data and functions are loaded.
```
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
boston.sf <- st_read(file.path(root.dir,"/Chapter3_4/boston_sf_Ch1_wrangled.geojson")) %>%
st_set_crs('ESRI:102286')
nhoods <-
st_read("http://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson") %>%
st_transform('ESRI:102286')
```
Here the data is split into training and test sets, modeled, and predictions are estimated for a `Baseline Regression`.
```
inTrain <- createDataPartition(
y = paste(boston.sf$Name, boston.sf$NUM_FLOORS.cat,
boston.sf$Style, boston.sf$R_AC),
p = .60, list = FALSE)
boston.training <- boston.sf[inTrain,]
boston.test <- boston.sf[-inTrain,]
reg.training <-
lm(SalePrice ~ ., data = as.data.frame(boston.training) %>%
dplyr::select(SalePrice, LivingArea, Style,
GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE, crimes.Buffer))
boston.test <-
boston.test %>%
mutate(Regression = "Baseline Regression",
SalePrice.Predict = predict(reg.training, boston.test),
SalePrice.Error = SalePrice.Predict - SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict - SalePrice),
SalePrice.APE = (abs(SalePrice.Predict - SalePrice)) / SalePrice.Predict)%>%
filter(SalePrice < 5000000)
```
4\.2 Do prices \& errors cluster? The Spatial Lag
-------------------------------------------------
So we now understand that even if a regression perfectly accounts for internal characteristics and public services/amenities, it may still have errors that cluster in space. The best way to test for this is to simply map the errors, as in Figure 4\.1\.
Clustering is also known as spatial autocorrelation \- the idea that nearer things are more related than farther things. Let’s consider a spatial autocorrelation test that correlates home prices with nearby home prices.
To do so, the code block below introduces new feature engineering that calculates, for each home sale, the *average* sale price of its *k* nearest neighbors. In spatial analysis parlance, this is the ‘spatial lag’, and it is measured using a series of functions from the `spdep` package.
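For reference, the spatial lag the code below constructs can be written formally (the formula is not spelled out in the text) as:

$$lag(x)_i = \sum_{j} w_{ij}\, x_j$$

where $w_{ij}$ are row\-standardized spatial weights (`style="W"` below), so each row of weights sums to one and the lag is simply the mean of the *k* nearest neighbors’ values.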
First, a matrix of point coordinates, `coords`, is created by taking the `st_coordinates` of `boston.sf`. The `knn2nb` function creates a `neighborList`, coding for each point its `5` nearest neighbors. Next, a ‘spatial weights matrix’ is created, formally relating each sale price observation to those in the `neighborList`. Finally, the `lag.listw` function calculates a spatial lag of price, `boston.sf$lagPrice`, which is the average price of a home sale’s 5 nearest neighbors. The leftmost panel of Figure 4\.2 above plots `SalePrice` as a function of `lagPrice`.
```
coords <- st_coordinates(boston.sf)
neighborList <- knn2nb(knearneigh(coords, 5))
spatialWeights <- nb2listw(neighborList, style="W")
boston.sf$lagPrice <- lag.listw(spatialWeights, boston.sf$SalePrice)
```
The interpretation of this plot is that as price increases, so does the price of nearby houses. The correlation for this spatial lag relationship is 0\.87 and is highly statistically significant. This is substantial evidence for clustering of home prices.
How about for model errors? The code block below replicates the spatial lag procedure for `SalePrice.Error`, calculating the lag directly in the `mutate` of `ggplot`. The relationship is visualized in the rightmost plot above and describes a marginal but significant correlation of 0\.24\. The interpretation is that as home price errors increase, so do nearby home price errors. That model errors are spatially autocorrelated suggests that critical spatial information has been omitted from the model. Let’s now demonstrate a second approach for measuring spatial autocorrelation \- Moran’s *I*.
```
coords.test <- st_coordinates(boston.test)
neighborList.test <- knn2nb(knearneigh(coords.test, 5))
spatialWeights.test <- nb2listw(neighborList.test, style="W")
boston.test %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)) %>%
ggplot(aes(lagPriceError, SalePrice.Error))
```
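The correlations cited here (0\.87 for price and its lag; 0\.24 for errors) are not computed in the code blocks shown. Below is a minimal sketch of how they might be checked, assuming the objects built above:

```
# Sketch (not the book's code): Pearson correlations behind the values cited
cor(boston.sf$SalePrice, boston.sf$lagPrice, use = "complete.obs")

boston.test %>%
  st_drop_geometry() %>%
  mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)) %>%
  summarize(errorLagCorrelation = cor(SalePrice.Error, lagPriceError,
                                      use = "complete.obs"))
```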
### 4\.2\.1 Do model errors cluster? \- Moran’s *I*
Moran’s *I* is based on a null hypothesis that a given spatial process is randomly distributed. The statistic analyzes how local means deviate from the global mean. A positive *I* near 1, describes positive spatial autocorrelation, or clustering. An *I* near \-1 suggests a spatial process where high and low prices/errors ‘repel’ one another, or are dispersed. Where positive and negative values are randomly distributed, the Moran’s *I* statistic is near 0\.
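For reference, the global Moran’s *I* statistic described above is conventionally defined as (the formula is not written out in the text):

$$I = \frac{n}{S_0} \cdot \frac{\sum_{i}\sum_{j} w_{ij}(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i}(x_i - \bar{x})^2}$$

where $x$ is the variable of interest (here, `SalePrice.Error`), $w_{ij}$ are the spatial weights, and $S_0 = \sum_{i}\sum_{j} w_{ij}$.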
A statistically significant p\-value overturns the null hypothesis to conclude a clustered spatial process. The Moran’s *I* p\-value is estimated by comparing the observed Moran’s *I* to the *I* calculated from many random permutations of points, like so:
1. The point geometries of home sale observations remain fixed in space.
2. Values of `SalePrice.Error` are randomly assigned to observed home sale locations.
3. Moran’s *I* is calculated for that permutation.
4. The process is repeated over *n* permutations creating a distribution of permuted Moran’s *I* values, which is then sorted.
5. If the observed Moran’s *I* value is greater than, say, 95% of the permuted *I* values, then conclude that the observed spatial distribution of errors exhibits greater clustering than expected by random chance alone (a p\-value of 0\.05\). A hand\-rolled sketch of this logic follows the list.
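To make these steps concrete, here is a minimal hand\-rolled sketch of the permutation test, reusing `neighborList.test` and `spatialWeights.test` from above. The `moran.mc` call later in this section automates exactly this; the object names `observed.I` and `permuted.I` are illustrative rather than from the text.

```
# Sketch of the permutation test that moran.mc automates
observed.I <- moran(boston.test$SalePrice.Error, spatialWeights.test,
                    n = length(neighborList.test),
                    S0 = Szero(spatialWeights.test))$I

# Steps 2-4: shuffle errors across fixed locations and recompute I each time
permuted.I <- replicate(999, {
  moran(sample(boston.test$SalePrice.Error), spatialWeights.test,
        n = length(neighborList.test),
        S0 = Szero(spatialWeights.test))$I })

# Step 5: pseudo p-value - share of permutations at least as clustered as observed
(sum(permuted.I >= observed.I) + 1) / (999 + 1)
```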
Figure 4\.3 demonstrates Moran’s *I* values for the three spatial processes. The `moran.mc` function (`mc` stands for ‘Monte Carlo’) is used for the random permutation approach. 999 random permutations of *I* are calculated plus 1 observed *I*, giving a total distribution of 1000 Moran’s *I* observations.
The `Clustered` point process yields a middling *I* of 0\.48, but a p\-value of 0\.001 suggests that the observed point process is more clustered than all 999 random permutations (1 / 1,000 \= 0\.001\) and is statistically significant. In the `Random` case, the *I* is close to 0 at 0\.0189, and the p\-value is insignificant at 0\.363\. Here, about a third of the random permutations had higher Moran’s *I* than the observed configuration. Finally, the `Dispersed` case has an *I* of \-1 suggesting perfect dispersion of values. The p\-value is 0\.999, with all random permutations having Moran’s *I* greater than the configuration pictured (999/1000 \= 0\.999\).
The `moran.mc` function below calculates Moran’s *I* for the `SalePrice.Error` in `boston.test`. Note the inclusion of the `spatialWeights.test` built above. The observed Moran’s *I* of 0\.13 seems marginal, but the p\-value of 0\.001 suggests model errors cluster more than what we might expect due to random chance alone.
In Figure 4\.4, the frequency of all 999 randomly permuted *I* values is plotted as a histogram, with the observed *I* indicated by the orange line. That the observed *I* is higher than all of the 999 randomly generated *I* values provides visual confirmation of spatial autocorrelation. Both the spatial lag and Moran’s *I* tests show that model errors exhibit spatial autocorrelation. Again, this suggests that variation in price likely related to the spatial process has been omitted from our current model.
In the next section, new geospatial features are created to account for some of this spatial variation.
```
moranTest <- moran.mc(boston.test$SalePrice.Error,
spatialWeights.test, nsim = 999)
ggplot(as.data.frame(moranTest$res[c(1:999)]), aes(moranTest$res[c(1:999)])) +
geom_histogram(binwidth = 0.01) +
geom_vline(aes(xintercept = moranTest$statistic), colour = "#FA7800",size=1) +
scale_x_continuous(limits = c(-1, 1)) +
labs(title="Observed and permuted Moran's I",
subtitle= "Observed Moran's I in orange",
x="Moran's I",
y="Count") +
plotTheme()
```
4\.3 Accounting for neighborhood
--------------------------------
Now onto new geospatial features to help account for the spatial process at the neighborhood scale. To do so, a ‘neighborhood fixed effect’ feature is created.
Fixed effects were discussed in the previous chapter. These are categorical features, like architectural `Style`, for which we hypothesize statistically significant differences in price. `summary(reg.training)` shows that some of the `Style` fixed effects are statistically significant and help to predict price. The same hypothesis can be extended to categorical neighborhood fixed effects.
For some intuition, the code block below regresses `SalePrice` as a function of neighborhood fixed effects, `Name`. A table is created showing that for each neighborhood, the mean `SalePrice` and the `meanPrediction` are identical. Thus, accounting for neighborhood effects accounts for the neighborhood mean of prices, and hopefully, some of the spatial process that was otherwise omitted from the model. Try to understand how `left_join` helps create the table below.
```
left_join(
st_drop_geometry(boston.test) %>%
group_by(Name) %>%
summarize(meanPrice = mean(SalePrice, na.rm = T)),
mutate(boston.test, predict.fe =
predict(lm(SalePrice ~ Name, data = boston.test),
boston.test)) %>%
st_drop_geometry %>%
group_by(Name) %>%
summarize(meanPrediction = mean(predict.fe))) %>%
kable() %>% kable_styling()
```
| Name | meanPrice | meanPrediction |
| --- | --- | --- |
| Beacon Hill | 2996000\.0 | 2996000\.0 |
| Charlestown | 904948\.6 | 904948\.6 |
| Dorchester | 524021\.5 | 524021\.5 |
| East Boston | 506665\.8 | 506665\.8 |
| Hyde Park | 413728\.0 | 413728\.0 |
| Jamaica Plain | 894167\.9 | 894167\.9 |
| Mattapan | 411423\.7 | 411423\.7 |
| Mission Hill | 1442000\.0 | 1442000\.0 |
| Roslindale | 481714\.3 | 481714\.3 |
| Roxbury | 536004\.8 | 536004\.8 |
| South Boston | 740452\.7 | 740452\.7 |
| South End | 3548250\.0 | 3548250\.0 |
| West Roxbury | 534254\.5 | 534254\.5 |
Table 4\.1
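Why are the two columns identical? This is standard OLS algebra rather than anything particular to this data: when the only regressor is a categorical variable, the constant that minimizes squared error within each group is that group’s mean, so the fitted value for every observation is its neighborhood mean:

$$\hat{y}_i = \bar{y}_{g(i)}, \quad \text{where } g(i) \text{ is the neighborhood of sale } i$$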
Let’s see how much new predictive power these neighborhood effects generate by re\-estimating the regression with the neighborhood `Name` feature. Note that `Name` was included in the `createDataPartition` function above to ensure that neighborhoods with few sale price observations are moved into the training set.
The code block below estimates `reg.nhood` and creates a data frame, `boston.test.nhood`, with all goodness of fit metrics.
```
reg.nhood <- lm(SalePrice ~ ., data = as.data.frame(boston.training) %>%
dplyr::select(Name, SalePrice, LivingArea,
Style, GROSS_AREA, NUM_FLOORS.cat,
R_BDRMS, R_FULL_BTH, R_HALF_BTH,
R_KITCH, R_AC, R_FPLACE,crimes.Buffer))
boston.test.nhood <-
boston.test %>%
mutate(Regression = "Neighborhood Effects",
SalePrice.Predict = predict(reg.nhood, boston.test),
SalePrice.Error = SalePrice.Predict- SalePrice,
SalePrice.AbsError = abs(SalePrice.Predict- SalePrice),
SalePrice.APE = (abs(SalePrice.Predict- SalePrice)) / SalePrice)%>%
filter(SalePrice < 5000000)
```
### 4\.3\.1 Accuracy of the neighborhood model
How much do neighborhood fixed effects improve the model relative to the `Baseline Regression`? `summary(reg.nhood)` indicates that the neighborhood effects are very significant, but have rendered the `Style` fixed effects insignificant, suggesting perhaps that architectural style and neighborhood are collinear. How might this make sense? The code block below binds error metrics from `bothRegressions`, calculating a `lagPriceError` for each.
```
bothRegressions <-
rbind(
dplyr::select(boston.test, starts_with("SalePrice"), Regression, Name) %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)),
dplyr::select(boston.test.nhood, starts_with("SalePrice"), Regression, Name) %>%
mutate(lagPriceError = lag.listw(spatialWeights.test, SalePrice.Error)))
```
First, a table is created describing the MAE and MAPE for `bothRegressions`. The `Neighborhood Effects` model is more accurate on both a dollar and a percentage basis. Interestingly, the R^2 of `reg.nhood` is 0\.92, which would be very high to a social scientist. To a data scientist, however, a MAPE of 18% ($109,446 at the mean price) suggests this model still needs much improvement to be used in the real world.
```
st_drop_geometry(bothRegressions) %>%
gather(Variable, Value, -Regression, -Name) %>%
filter(Variable == "SalePrice.AbsError" | Variable == "SalePrice.APE") %>%
group_by(Regression, Variable) %>%
summarize(meanValue = mean(Value, na.rm = T)) %>%
spread(Variable, meanValue) %>%
kable()
```
| Regression | SalePrice.AbsError | SalePrice.APE |
| --- | --- | --- |
| Baseline Regression | 183140\.4 | 0\.7445456 |
| Neighborhood Effects | 105697\.1 | 0\.1832663 |
Table 4\.2
Next, predicted prices are plotted as a function of observed prices. Recall the orange line represents a would\-be perfect fit, while the green line represents the predicted fit. The `Neighborhood Effects` model clearly fits the data better, and does so for all price levels, low, medium and high. Note, for these plots, `Regression` is the grouping variable in `facet_wrap`.
The neighborhood effects added much predictive power, perhaps by explaining part of the spatial process. As such, we should now expect less clustering or spatial autocorrelation in model errors.
```
bothRegressions %>%
dplyr::select(SalePrice.Predict, SalePrice, Regression) %>%
ggplot(aes(SalePrice, SalePrice.Predict)) +
geom_point() +
stat_smooth(aes(SalePrice, SalePrice),
method = "lm", se = FALSE, size = 1, colour="#FA7800") +
stat_smooth(aes(SalePrice.Predict, SalePrice),
method = "lm", se = FALSE, size = 1, colour="#25CB10") +
facet_wrap(~Regression) +
labs(title="Predicted sale price as a function of observed price",
subtitle="Orange line represents a perfect prediction; Green line represents prediction") +
plotTheme()
```
### 4\.3\.2 Spatial autocorrelation in the neighborhood model
Model errors that cluster in space reflect a regression that has omitted the spatial process. Given that the neighborhood fixed effects improved model accuracy, we might expect errors to be less spatially autocorrelated.
To test this hypothesis, Figure 4\.6 maps model errors across `bothRegressions`. Does it look like the errors from the `Neighborhood Effects` are more randomly distributed in space relative to the `Baseline`?
The Moran’s *I* for the `Baseline` and `Neighborhood Effects` regression errors are 0\.13 and 0\.2, respectively, and both are statistically significant.
Figure 4\.7 plots errors as a function of lagged model errors for both regressions. Note that the range of errors is smaller for the `Neighborhood Effects` regression because it is more accurate. The correlations for the `Baseline` and `Neighborhood Effects` are 0\.24 and 0\.35, respectively, and are also statistically significant.
What does this all mean? We hypothesized that neighborhood effects could help account for the otherwise missing spatial process. Doing so reduced errors by nearly half, but spatial autocorrelation remained. Why?
For one, the neighborhood fixed effect may be controlling for omitted public services/amenity features like schools, access to transit, etc. Second, these features account for the spatial process *across* neighborhoods. Is it possible that other spatial processes exist *within* neighborhoods? Can you think of some features that might help account for the spatial process at even smaller spatial scales? We return to this question in Chapter 5\.
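The Moran’s *I* and lag correlations cited in this subsection are not computed in any code block shown. Below is a minimal sketch of how they might be reproduced, assuming `bothRegressions` and `spatialWeights.test` share the same test\-set rows:

```
# Sketch (illustrative): spatial autocorrelation of errors for one regression
filter(bothRegressions, Regression == "Neighborhood Effects") %>%
  pull(SalePrice.Error) %>%
  moran.mc(spatialWeights.test, nsim = 999)

# ...and the correlation between each regression's errors and their spatial lag
st_drop_geometry(bothRegressions) %>%
  group_by(Regression) %>%
  summarize(lagErrorCorrelation = cor(SalePrice.Error, lagPriceError,
                                      use = "complete.obs"))
```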
### 4\.3\.3 Generalizability of the neighborhood model
Recall the two definitions of generalizability: first, the ability to predict accurately on new data \- the focus of cross\-validation in Chapter 3; and second, the ability to predict with comparable accuracy across different group contexts, like neighborhoods. Here, the focus will be on the latter. It is paramount that a geospatial predictive model generalize to different neighborhood contexts. If it does not, then the algorithm may not be fair.
Generalizability across space is very challenging with OLS, because OLS focuses on relationships at the mean. A model trained on Boston will not perform well on Phoenix, for example, because the urban context (at the mean) varies so significantly. Similarly, a model trained on eight low\-income neighborhoods and used to predict for a ninth, wealthy neighborhood will also perform poorly.
Two approaches for testing across\-neighborhood generalizability are demonstrated. The first simply maps Mean Absolute Percent Errors (MAPE) by neighborhood. The second gathers Census data to test how well each model generalizes to different group contexts \- like race and income.
Figure 4\.8 below maps the mean MAPE by `nhoods` for `bothRegressions`. Not only is the `Neighborhood Effects` model more accurate in general, but its accuracy is more consistent across neighborhoods. This consistency suggests the *Neighborhood Effects* model is more generalizable. Note the `left_join` which attaches `mean.MAPE` to neighborhood geometries.
```
st_drop_geometry(bothRegressions) %>%
group_by(Regression, Name) %>%
summarize(mean.MAPE = mean(SalePrice.APE, na.rm = T)) %>%
ungroup() %>%
left_join(nhoods) %>%
st_sf() %>%
ggplot() +
geom_sf(aes(fill = mean.MAPE)) +
geom_sf(data = bothRegressions, colour = "black", size = .5) +
facet_wrap(~Regression) +
scale_fill_gradient(low = palette5[1], high = palette5[5],
name = "MAPE") +
labs(title = "Mean test set MAPE by neighborhood") +
mapTheme()
```
To test generalizability across urban contexts, `tidycensus` downloads Census data to define a `raceContext` and an `incomeContext` (don’t forget your `census_api_key`). `output = "wide"` brings in the data in wide form. Census tracts where at least 51% of residents are white receive a `Majority White` designation. Tracts with incomes greater than the citywide mean receive a `High Income` designation.
These designations are arbitrary and suffer from MAUP bias but are still useful for providing descriptive context of generalizability. Figure 4\.9 below maps the two neighborhood contexts. As these outcomes have different categorical labels, `facet_wrap` cannot be used to create a small multiple map. Instead `grid.arrange` binds both maps together over two columns (`ncol=2`).
```
tracts17 <-
get_acs(geography = "tract", variables = c("B01001_001E","B01001A_001E","B06011_001"),
year = 2017, state=25, county=025, geometry=T, output = "wide") %>%
st_transform('ESRI:102286') %>%
rename(TotalPop = B01001_001E,
NumberWhites = B01001A_001E,
Median_Income = B06011_001E) %>%
mutate(percentWhite = NumberWhites / TotalPop,
raceContext = ifelse(percentWhite > .5, "Majority White", "Majority Non-White"),
incomeContext = ifelse(Median_Income > 32322, "High Income", "Low Income"))
grid.arrange(ncol = 2,
ggplot() + geom_sf(data = na.omit(tracts17), aes(fill = raceContext)) +
scale_fill_manual(values = c("#25CB10", "#FA7800"), name="Race Context") +
labs(title = "Race Context") +
mapTheme() + theme(legend.position="bottom"),
ggplot() + geom_sf(data = na.omit(tracts17), aes(fill = incomeContext)) +
scale_fill_manual(values = c("#25CB10", "#FA7800"), name="Income Context") +
labs(title = "Income Context") +
mapTheme() + theme(legend.position="bottom"))
```
Figure 4\.9 shows that Boston is very segregated with respect to both race and income. In the code block below, MAPE is calculated across `bothRegressions` and both neighborhood contexts. There is a significant difference in goodness of fit across racial contexts, but the gap is narrower for the `Neighborhood Effects` model.
```
st_join(bothRegressions, tracts17) %>%
group_by(Regression, raceContext) %>%
summarize(mean.MAPE = scales::percent(mean(SalePrice.APE, na.rm = T))) %>%
st_drop_geometry() %>%
spread(raceContext, mean.MAPE) %>%
kable(caption = "Test set MAPE by neighborhood racial context")
```
| Regression | Majority Non\-White | Majority White |
| --- | --- | --- |
| Baseline Regression | 119% | 43% |
| Neighborhood Effects | 20% | 17% |
Table 4\.3
The same trend is evident for income, with the `Baseline Regression` exhibiting much higher error rate differences across income contexts. It is now clear how the inclusion of neighborhood effects makes the model more generalizable. What would be the consequences of deploying the `Baseline Regression` to assess property taxes?
What do you notice when calculating mean price *error* by group? `SalePrice.Error` is calculated by subtracting observed price from predicted price, meaning that a positive error represents an over\-prediction. If put into production, this algorithm would disproportionately over\-assess and overtax poorer, minority neighborhoods. Such an outcome would be both grossly unfair and a political liability. This ‘disparate impact’ will be the focus of several chapters to come.
```
st_join(bothRegressions, tracts17) %>%
filter(!is.na(incomeContext)) %>%
group_by(Regression, incomeContext) %>%
summarize(mean.MAPE = scales::percent(mean(SalePrice.APE, na.rm = T))) %>%
st_drop_geometry() %>%
spread(incomeContext, mean.MAPE) %>%
kable(caption = "Test set MAPE by neighborhood income context")
```
| Regression | High Income | Low Income |
| --- | --- | --- |
| Baseline Regression | 43% | 115% |
| Neighborhood Effects | 16% | 21% |
Table 4\.4
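The question posed above about mean price *error* by group can be explored with a sketch like the following (not shown in the text); a positive mean error in a given context indicates systematic over\-prediction, and therefore over\-assessment, there:

```
# Sketch: mean (signed) price error by racial context for both regressions
st_join(bothRegressions, tracts17) %>%
  filter(!is.na(raceContext)) %>%
  group_by(Regression, raceContext) %>%
  summarize(mean.Error = scales::dollar(mean(SalePrice.Error, na.rm = T))) %>%
  st_drop_geometry() %>%
  spread(raceContext, mean.Error) %>%
  kable(caption = "Mean test set price error by neighborhood racial context")
```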
4\.4 Conclusion \- Features at multiple scales
----------------------------------------------
Chapter 3 introduced the geospatial machine learning framework, culminating in a model that did not account for the spatial process of home prices. In this chapter, new methods have been introduced for quantifying spatial autocorrelation. Neighborhood features were added to control for at least part of the spatial process.
These neighborhood ‘fixed effects’ increased the accuracy and generalizability of the model, but model errors remained clustered. What makes home prices so difficult to predict (and such a great first use case) is that prices cluster at different spatial scales. We successfully accounted for the neighborhood scale, but other spatial relationships exist at smaller scales.
Some readers will be drawn to the idea of location\-based fixed effects and seek to engineer neighborhoods that better account for within\- and across\-neighborhood price variation. This is a slippery slope. One reasonable approach may be to ‘cluster’ prices into a set of optimal neighborhoods, but one will quickly find that ever\-smaller neighborhoods appear better. Imagine the smallest neighborhood \- a fixed effect for each house. The model would fit the training data almost perfectly, but fail if used to predict for a new property (for which no coefficient has been estimated).
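A toy sketch of that failure mode, using entirely made\-up data (this example is not from the text):

```
# Toy illustration: a fixed effect for every house fits the training data
# perfectly, but cannot predict for a house it has never seen
toy <- data.frame(houseID = factor(1:5),
                  price   = c(300000, 450000, 250000, 600000, 520000))
perfect.fit <- lm(price ~ houseID, data = toy)
summary(perfect.fit)$r.squared   # exactly 1 - zero residual error in-sample
# predict(perfect.fit, data.frame(houseID = factor(6)))  # errors: new factor level
```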
This failure mode is referred to as ‘overfitting’. The models in these two chapters lack accuracy and are ‘underfit’. Conversely, a model with an R^2 of 1 or a MAPE of 0% is ‘overfit’ and would fail on new data. I have provided some tests to help, but no one statistic can conclude whether a model is under\- or overfit. Accuracy and generalizability are trade\-offs, and the best way to judge a model is by relating it to the decision\-making process.
Finally, neighborhood controls created a more generalizable model, presumably because the model successfully ‘borrowed’ a diverse set of housing market ‘experiences’ across different communities (i.e., low\-income and wealthy). However, as we will learn in the next chapter, if the data gathering process itself does not generalize across space, it is unlikely that the predictions generated from those data will generalize either. We will continue to see how generalizability is the most significant consideration in public\-sector predictive modeling.
Chapter 5 Geospatial risk modeling \- Predictive Policing
=========================================================
5\.1 New predictive policing tools
----------------------------------
Of the few public\-sector machine learning algorithms in use, few are as prevalent or as despised as the suite of algorithms commonly referred to as ‘Predictive Policing’. Predictive Policing comes in several flavors, from forecasting where crime will happen and who will commit it, to predicting judicial responses to crime and investigative outcomes.
Policing has driven public\-sector machine learning because law enforcement has significant planning and resource allocation questions. In 2017, the Chicago Police Department recorded 268,387 incidents or 735 incidents per day, on average. Nearly 12,000 sworn officers patrol a 228 square mile area. How should police commanders strategically allocate these officers in time and space?
As discussed in 0\.3, data science is an ideal planning tool to ensure that the supply of a limited resource (e.g. policing), matches the demand for those resources (e.g. crime). For many years now, police departments have been all\-in on using crime\-related data for planning purposes.
New York City’s CompStat program, which began in the mid\-1990s, was set up to promote operational decision\-making through the compilation of crime statistics and maps. A 1999 survey found that 11% of small and 33% of large police departments nationally had implemented a CompStat program.[29](#fn29) By 2013, 20% of surveyed police departments said they had at least 1 full\-time sworn officer “conducting research…using computerized records”, and 38% reported at least one non\-sworn, full\-time staff member.[30](#fn30)
Emerging demand for this technology, a growing abundance of cloud services, and sophisticated machine learning algorithms have all given rise to several commercial Predictive Policing products. Several of these focus on place\-based predictions, including PredPol[31](#fn31), Risk Terrain Modeling[32](#fn32), and ShotSpotter Missions.[33](#fn33)
Today, with growing awareness that Black Lives Matter and that policing resources in America are allocated in part, by way of systematic racism, critics increasingly charge that Predictive Policing is harmful. Some cities like Santa Cruz, CA have banned the practice outright[34](#fn34), while others like Pittsburgh are pivoting the technology to allocate social services instead of law enforcement.[35](#fn35)
With this critique in mind, why include Predictive Policing in this book? First and foremost, this chapter provides important context for how to judge a ‘useful’ machine learning model in government (Section 0\.3\). Second, it provides an even more nuanced definition of model generalizability. Finally, it is the only open source tutorial on geospatial risk modeling that I am aware of. Whether or not these models proliferate (I don’t see them going away any time soon), it will be helpful for stakeholders to gain a shared understanding of their inner\-workings to prevent increased bias and unwarranted surveillance.
### 5\.1\.1 Generalizability in geospatial risk models
A geospatial risk model is a regression model, not unlike those covered in the previous two chapters. The dependent variable is the occurrence of discrete events like crime, fires, successful sales calls, locations of donut shops, etc.[36](#fn36) Predictions from these models are interpreted as ‘the forecasted risk/opportunity of that event occurring *here*’.
For crime, the hypothesis is that crime risk is a function of *exposure* to a series of geospatial risk and protective factors, such as blight or recreation centers, respectively. The assumption is that as exposure to risk factors increases, so does crime risk.
The goal is to borrow the experience from places where crime is observed and test whether that experience generalizes to places that may be at risk for crime, even if few events are reported. Crime is relatively rare, and observed crime likely discounts actual crime risk. Predictions from geospatial risk models can be thought of as *latent risk* \- areas at risk even if a crime has not actually been reported there. By this definition, it is easy to see why such a forecast would be appealing for police.
Interestingly, a model that predicts crime risk equal to observed crime (a Mean Absolute Error of `0`), would not reveal any latent risk. As a resource allocation tool, this would not be any more useful than one which tells police to return to yesterday’s crime scene. Thus, accuracy is not as critical for this use case as it was for the last one.
Generalizability, on the other hand, is incredibly important. Of the two definitions previously introduced, the second will be more important here. A generalizable geospatial risk prediction model is one that predicts risk with comparable accuracy across different neighborhoods. When crime is the outcome of interest, this is anything but straightforward.
Consider that home sale prices from Chapters 3 and 4, are pulled from a complete sample \- any house that transacted shows up in the data. Not only is say, the location of every drug offense not a complete sample, but it is not likely a representative sample, either. For a drug offense event to appear in the data, it must be observed by law enforcement. Many drug offenses go unobserved, and worse, officers may selectively choose to enforce drug crime more fervently in some communities than others.[37](#fn37)
If selective enforcement is the result of an officer’s preconceived or biased beliefs, then this ‘selection bias’ will be baked into the crime outcome. If a model fails to account for this bias, it will fall into the error term of the regression and lead to predictions that do not generalize across space.
One might assume that additional controls for race can help account for the selection bias, but the true nature of the bias is unknown. It may be entirely about race, partially so, or a confluence of factors; and likely some officers are more biased than others. Thus, without observing the explicit selection *criteria*, it is not possible to fully account for the selection bias.
For this reason, a model trained to predict drug offense risk, for instance, is almost certainly not useful. However, one trained to predict building fire risk may be. Fire departments cannot selectively choose which building fires to extinguish and which to let rage. Thus, all else equal, the building fire outcome (and ultimately the predictions) is more likely to generalize across space.
Selection bias may exist in the dependent variable, but it may also exist in the features as well. A second reason why these models may not generalize has to do with the exposure hypothesis.
The rationale for the exposure hypothesis can be traced in part to the Broken Windows Theory, which posits a link between community ‘disorder’ and crime.[38](#fn38) The theory is that features of the built environment such as blight and disrepair may signal a local tolerance for criminality. A host of built environment risk factors impact crime risk, but many of these suffer from selection bias.[39](#fn39)
Perhaps graffiti is a signal that criminality is more locally accepted. The best source of address\-level graffiti data comes from 311 open datasets.[40](#fn40) Assume graffiti was equally distributed throughout the city, but only residents of certain neighborhoods (maybe those where graffiti was rare) chose to file 311 reports. If this reporting behavior is not observed in the model, model predictions would reflect this spatial selection bias and would not generalize.
Operators of ‘risk factors’ like homeless shelters, laundromats, and check cashing outlets may select into certain neighborhoods based on market preferences. Spatial selection here may also be an issue. So why is this important and what role does Broken Windows and selection bias play in Predictive Policing?
### 5\.1\.2 From Broken Windows Theory to Broken Windows Policing
There is a wealth of important criminological and social science research on the efficacy of Broken Windows Theory. While I am unqualified to critique these findings, I do think it is useful to consider the consequences of developing a police allocation tool based on Broken Windows Theory.
Racist place\-based policies, like redlining and mortgage discrimination, corralled low\-income minorities into segregated neighborhoods without the tools for economic empowerment.[41](#fn41) Many of these communities are characterized by blight and disrepair. A risk model predicated on disrepair (Broken Windows) might only perpetuate these same racist place\-based policies.
In other words, if the data inputs to a forecasting model are biased against communities of color, then the prediction outputs of that model will also be biased against those places. This bias compounds when risk predictions are converted to resource allocation. Surveillance increases, more crimes are ‘reported’, and more risk is predicted.
When a reasonable criminological theory is operationalized into an empirical model based on flawed data, the result is an unuseful decision\-making tool.
Because the lack of generalizability is driven in part by unobserved information like selection bias, despite our best efforts below, we still will not know the degree of bias baked into the risk predictions. While a different outcome, like fire, could have been chosen to demonstrate geospatial risk prediction, policing is the most ubiquitous example of the approach, and worth our focus.
In this chapter, a geospatial risk predictive model of burglary is created. We begin by wrangling burglary and risk factor data into geospatial features, correlating their exposure, and estimating models to predict burglary latent risk. These models are then validated in part, by comparing predictions to a standard, business\-as\-usual measure of geospatial crime risk.
### 5\.1\.3 Setup
Begin by loading the requisite packages and functions, including the `functions.R` script.
```
library(tidyverse)
library(sf)
library(RSocrata)
library(viridis)
library(spatstat)
library(raster)
library(spdep)
library(FNN)
library(grid)
library(gridExtra)
library(knitr)
library(kableExtra)
library(tidycensus)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
```
5\.2 Data wrangling: Creating the `fishnet`
-------------------------------------------
What is the appropriate unit of analysis for predicting geospatial crime risk? No matter the answer, scale biases are likely (Chapter 1\). The best place to start, however, is with the resource allocation process. Many police departments divide the city into administrative areas in which some measure of decision\-making autonomy is allowed. How useful would a patrol allocation algorithm be if predictions were made at the Police Beat or District scale, as in Figure 5\.1?
Tens or hundreds of thousands of people live in a Police District, so a single, District\-wide risk prediction would not provide precise enough intelligence on where officers should patrol. In the code below, `policeDistricts` and `policeBeats` are downloaded.
```
policeDistricts <-
st_read("https://data.cityofchicago.org/api/geospatial/fthy-xz3r?method=export&format=GeoJSON") %>%
st_transform('ESRI:102271') %>%
dplyr::select(District = dist_num)
policeBeats <-
st_read("https://data.cityofchicago.org/api/geospatial/aerh-rz74?method=export&format=GeoJSON") %>%
st_transform('ESRI:102271') %>%
dplyr::select(District = beat_num)
bothPoliceUnits <- rbind(mutate(policeDistricts, Legend = "Police Districts"),
mutate(policeBeats, Legend = "Police Beats"))
```
Instead, consider crime risk not as a phenomenon that varies across administrative units, but one varying smoothly across the landscape, like elevation. Imagine that crime clusters in space, and that crime risk dissipates outward from these ‘hot spots’, like elevation dips from mountaintops to valleys. The best way to represent this spatial trend in a regression\-ready form, is to aggregate point\-level data into a lattice of grid cells.
This grid cell lattice is referred to as the `fishnet` and created below from a `chicagoBoundary` that omits O’Hare airport in the northwest of the city. `st_make_grid` is used to create a `fishnet` with 500ft by 500ft grid cells.[42](#fn42)
```
chicagoBoundary <-
st_read(file.path(root.dir,"/Chapter5/chicagoBoundary.geojson")) %>%
st_transform('ESRI:102271')
fishnet <-
st_make_grid(chicagoBoundary, cellsize = 500) %>%
st_sf() %>%
mutate(uniqueID = rownames(.))
```
Next, `burglaries` are downloaded from the Chicago Open Data site using the `RSocrata` package. Socrata is the creator of the open source platform that Chicago uses to share its data.
The code block below downloads the data and selects only ‘forcible entry’ burglaries. Some additional data wrangling removes extraneous characters from the `Location` field with `gsub`. The resulting field is then split with `separate` into `X` and `Y` coordinate fields. Those fields are then made numeric, converted to simple features, projected, and duplicate geometries are removed with `distinct`. The density map (`stat_density2d`) shows some clear burglary hotspots in Chicago.
```
burglaries <-
read.socrata("https://data.cityofchicago.org/Public-Safety/Crimes-2017/d62x-nvdr") %>%
filter(Primary.Type == "BURGLARY" & Description == "FORCIBLE ENTRY") %>%
mutate(x = gsub("[()]", "", Location)) %>%
separate(x,into= c("Y","X"), sep=",") %>%
mutate(X = as.numeric(X),Y = as.numeric(Y)) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant")%>%
st_transform('ESRI:102271') %>%
distinct()
```
### 5\.2\.1 Data wrangling: Joining burglaries to the `fishnet`
To get the count of burglaries by grid cell, first `mutate` a ‘counter’ field, `countBurglaries`, for each burglary event, and then spatial join (`aggregate`) burglary points to the `fishnet`, taking the sum of `countBurglaries`.
A grid cell with no burglaries receives `NA` which is converted to `0` with `replace_na`. A random `uniqueID` is generated for each grid cell, as well as a random group `cvID`, used later for cross\-validation. Roughly 100 `cvID`s are generated (`round(nrow(burglaries) / 100)`) to allow 100\-fold cross validation below.
Figure 5\.4 maps the count of burglaries by grid cell, and the clustered spatial process of burglary begins to take shape. Notice the use of `scale_fill_viridis` from the `viridis` package, which applies the blue\-through\-yellow color ramp.
```
crime_net <-
dplyr::select(burglaries) %>%
mutate(countBurglaries = 1) %>%
aggregate(., fishnet, sum) %>%
mutate(countBurglaries = replace_na(countBurglaries, 0),
uniqueID = rownames(.),
cvID = sample(round(nrow(fishnet) / 24), size=nrow(fishnet), replace = TRUE))
ggplot() +
geom_sf(data = crime_net, aes(fill = countBurglaries)) +
scale_fill_viridis() +
labs(title = "Count of Burglaires for the fishnet") +
mapTheme()
```
### 5\.2\.2 Wrangling risk factors
The `crime_net` includes counts of the dependent variable, burglary. Next, a small set of risk factor features is downloaded and wrangled to the `fishnet`. The very simple model created in this chapter is based on a limited set of features; a typical analysis would likely include many more.
Six risk factors are downloaded: 311 reports of abandoned cars, street lights out, graffiti remediation, sanitation complaints, and abandoned buildings, as well as the location of retail stores that sell liquor to go. A neighborhood polygon layer is also downloaded.
Take note of the approach used to wrangle each dataset. Data is downloaded; a year field and coordinates are created; and the coordinates are converted to `sf`. The data is then projected and a `Legend` field is added to label the risk factor. This allows the layers to be bound together with `rbind` into one dataset for small multiple mapping, as shown in Figure 5\.5\.
The `graffiti` code block is the first where we have seen the `%in%` operator, which enables `filter` to take inputs from a list rather than chaining together several ‘or’ (`|`) statements.
```
abandonCars <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Abandoned-Vehicles/3c9v-pnva") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Abandoned_Cars")
abandonBuildings <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Vacant-and-Abandoned-Building/7nii-7srd") %>%
mutate(year = substr(date_service_request_was_received,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Abandoned_Buildings")
graffiti <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Graffiti-Removal-Historical/hec5-y4x5") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
filter(where_is_the_graffiti_located_ %in% c("Front", "Rear", "Side")) %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Graffiti")
streetLightsOut <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Street-Lights-All-Out/zuxi-7xem") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Street_Lights_Out")
sanitation <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Sanitation-Code-Complaints-Hi/me59-5fac") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Sanitation")
liquorRetail <-
read.socrata("https://data.cityofchicago.org/resource/nrmj-3kcf.json") %>%
filter(business_activity == "Retail Sales of Packaged Liquor") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Liquor_Retail")
neighborhoods <-
st_read("https://raw.githubusercontent.com/blackmad/neighborhoods/master/chicago.geojson") %>%
st_transform(st_crs(fishnet))
```
5\.3 Feature engineering \- Count of risk factors by grid cell
--------------------------------------------------------------
We have already seen feature engineering strategies for measuring exposure, but grid cells are an added nuance. We start by binding the individual risk factor layers into one long form layer of points. Next, the fishnet is spatially joined (`st_join`) *to each point*. The outcome is a large point data frame in which each point carries the `uniqueID` of the grid cell it falls within.
That output is then converted from a long form layer of risk factor points with grid cell `uniqueID`s, to a wide form layer of grid cells with risk factor columns. This is done by grouping on grid cell `uniqueID` and risk factor `Legend`, then summing the count of events. The `full_join` adds the fishnet geometries and `spread` converts to wide form. `vars_net` is now regression\-ready.
```
vars_net <-
rbind(abandonCars,streetLightsOut,abandonBuildings,
liquorRetail, graffiti, sanitation) %>%
st_join(., fishnet, join=st_within) %>%
st_drop_geometry() %>%
group_by(uniqueID, Legend) %>%
summarize(count = n()) %>%
full_join(fishnet) %>%
spread(Legend, count, fill=0) %>%
st_sf() %>%
dplyr::select(-`<NA>`) %>%
na.omit() %>%
ungroup()
```
Now let’s map each `vars_net` feature as a small multiple map. Each risk factor has a different range (min, max) and at this stage of `sf` development, `facet_wrap` is unable to calculate unique legends for each. Instead, a loop is used to create each risk factor map individually, and the maps are then compiled (i.e. `grid.arrange`) into one small multiple plot.
`vars_net.long` is `vars_net` in long form; `vars` is a vector of `Variable` names; and `mapList` is an empty list. The loop says for each `Variable`, *i*, in `vars`, create a map by `filter`ing for `Variable` *i* and adding it to `mapList`. Once all six risk factor maps have been created, `do.call` then loops through the `mapList` and arranges each map in the small multiple plot. Try to recreate this visualization with `facet_wrap` and notice the difference in legends.
These risk factors illustrate slightly different spatial processes. `Abandoned_Buildings` are mainly clustered in South Chicago, while `Abandoned_Cars` are mostly in the north. 311 complaints for Graffiti tend to cluster along major thoroughfares and `Liquor_Retail` is heavily clustered in “The Loop”, Chicago’s downtown. In 5\.4\.1, we’ll test the correlation between these features and `burglaries`.
```
vars_net.long <-
gather(vars_net, Variable, value, -geometry, -uniqueID)
vars <- unique(vars_net.long$Variable)
mapList <- list()
for(i in vars){
mapList[[i]] <-
ggplot() +
geom_sf(data = filter(vars_net.long, Variable == i), aes(fill=value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme()}
do.call(grid.arrange,c(mapList, ncol=3, top="Risk Factors by Fishnet"))
```
### 5\.3\.1 Feature engineering \- Nearest neighbor features
Consider how grid cell counts impose a very rigid spatial scale of exposure. Our second feature engineering approach is to calculate average nearest neighbor distance (3\.2\.1\) to hypothesize a smoother exposure relationship across space. Here, the `nn_function` is used.
Average nearest neighbor features are created by converting `vars_net` grid cells to centroid points and then measuring the average distance to the *k* nearest risk factor points. Note, the `nn_function` requires both input layers to be points. For demonstration purposes *k* is set to 3, but ideally, one would test different *k* definitions of scale. Two shortcut functions are created to make the code block less verbose.
```
st_c <- st_coordinates
st_coid <- st_centroid
vars_net <-
vars_net %>%
mutate(
Abandoned_Buildings.nn =
nn_function(st_c(st_coid(vars_net)), st_c(abandonBuildings),3),
Abandoned_Cars.nn =
nn_function(st_c(st_coid(vars_net)), st_c(abandonCars),3),
Graffiti.nn =
nn_function(st_c(st_coid(vars_net)), st_c(graffiti),3),
Liquor_Retail.nn =
nn_function(st_c(st_coid(vars_net)), st_c(liquorRetail),3),
Street_Lights_Out.nn =
nn_function(st_c(st_coid(vars_net)), st_c(streetLightsOut),3),
Sanitation.nn =
nn_function(st_c(st_coid(vars_net)), st_c(sanitation),3))
```
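The `nn_function` itself lives in `functions.r` and is not printed in the text. For intuition only, a helper of this kind can be sketched with the `FNN` package (loaded in the Setup); `nn_sketch` below is a hypothetical name and not the book’s implementation.
```
# A minimal sketch (assumed) of an average nearest neighbor helper: for each
# point in measureFrom, return the mean distance to its k nearest points in
# measureTo. Both inputs are coordinate matrices, e.g. from st_coordinates().
nn_sketch <- function(measureFrom, measureTo, k) {
  dists <- FNN::get.knnx(measureTo, measureFrom, k = k)$nn.dist
  rowMeans(dists)
}
```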
The nearest neighbor features are then plotted below. Note the use of `select` and `ends_with` to map only the nearest neighbor features.
```
vars_net.long.nn <-
dplyr::select(vars_net, ends_with(".nn")) %>%
gather(Variable, value, -geometry)
vars <- unique(vars_net.long.nn$Variable)
mapList <- list()
for(i in vars){
mapList[[i]] <-
ggplot() +
geom_sf(data = filter(vars_net.long.nn, Variable == i), aes(fill=value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme()}
do.call(grid.arrange,c(mapList, ncol = 3, top = "Nearest Neighbor risk Factors by Fishnet"))
```
### 5\.3\.2 Feature Engineering \- Measure distance to one point
It also may be reasonable to measure distance to a single point, like the centroid of the Loop \- Chicago’s Central Business District. This is done with `st_distance`.
```
loopPoint <-
filter(neighborhoods, name == "Loop") %>%
st_centroid()
vars_net$loopDistance =
st_distance(st_centroid(vars_net),loopPoint) %>%
as.numeric()
```
### 5\.3\.3 Feature Engineering \- Create the `final_net`
Next, the `crime_net` and `vars_net` layers are joined into one regression\-ready layer, `final_net`. In the code block below, the `left_join` is enabled by converting `vars_net` to a data frame with `st_drop_geometry`.
```
final_net <-
left_join(crime_net, st_drop_geometry(vars_net), by="uniqueID")
```
Neighborhood `name` and police `District` are spatially joined to the `final_net` using grid cell centroids (1\.2\.3\). The `st_drop_geometry` and `left_join` operations then drop the point centroid geometries, join the result back to the grid cell geometries, and convert again to `sf`. Some grid cell centroids do not fall into a neighborhood (returning `NA`) and are removed with `na.omit`. Other neighborhoods are so small that they are represented by only one grid cell.
```
final_net <-
st_centroid(final_net) %>%
st_join(dplyr::select(neighborhoods, name)) %>%
st_join(dplyr::select(policeDistricts, District)) %>%
st_drop_geometry() %>%
left_join(dplyr::select(final_net, geometry, uniqueID)) %>%
st_sf() %>%
na.omit()
```
5\.4 Exploring the spatial process of burglary
----------------------------------------------
In 4\.2\.1, ‘Global’ Moran’s *I* was used to test for spatial autocorrelation (clustering) of home prices and model errors. This information provided insight into the spatial process of home prices, accounting for neighborhood scale clustering, but not clustering at more local scales. Here, that local spatial process is explored.
To do so, a statistic called Local Moran’s *I* is introduced. Here, the null hypothesis is that the burglary count at a given location is randomly distributed relative to its *immediate neighbors*.
Like its global cousin, a spatial weights matrix is used to relate a unit to its neighbors. In 4\.2, a nearest neighbor weights matrix was used. Here, weights are calculated with ‘polygon adjacency’. The code block below creates a neighbor list, `final_net.nb`, with `poly2nb`, and a spatial weights matrix, `final_net.weights`, with `nb2listw`, using `queen` contiguity. This means that every grid cell is related to its eight adjacent neighbors (think about how a queen moves on a chess board).
Figure 5\.10 below visualizes one grid cell and its queen neighbors. Any one grid cell’s neighbors can be returned like so, `final_net.nb[[1457]]`.
```
final_net.nb <- poly2nb(as_Spatial(final_net), queen=TRUE)
final_net.weights <- nb2listw(final_net.nb, style="W", zero.policy=TRUE)
```
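The code behind Figure 5\.10 is not shown in the text; a minimal sketch like the following, assuming the `final_net` and `final_net.nb` objects above and an arbitrary example cell index, could produce a similar map.
```
# A minimal sketch (assumed, not the book's exact figure code): highlight one
# grid cell (black) and its queen-contiguity neighbors (yellow).
exampleCell <- 1457
ggplot() +
  geom_sf(data = final_net, fill = "grey90", colour = "white") +
  geom_sf(data = final_net[final_net.nb[[exampleCell]], ], fill = "#FDE725FF") +
  geom_sf(data = final_net[exampleCell, ], fill = "black") +
  labs(title = "One grid cell and its queen neighbors") +
  mapTheme()
```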
Figure 5\.11 below describes the *local* spatial process of burglary. `final_net.localMorans` is created by column binding `final_net` with the results of a `localmoran` test. The inputs to the `localmoran` test include `countBurglaries` and the spatial weights matrix. Several useful test statistics are output including *I*, the p\-value, and `Significant_Hotspots`, defined as those grid cells with higher local counts than what might otherwise be expected under randomness (p\-values \<\= 0\.05\). The data frame is then converted to long form for mapping.
Another `grid.arrange` loop is used to create a small multiple map of the aforementioned indicators. The legend in Figure 5\.11 below, shows that relatively high values of *I* represent strong and statistically significant evidence of local clustering. Further evidence of this can be seen in the p\-value and significant hotspot maps. This test provides insight into the scale, location and intensity of burglary hotspots.
```
final_net.localMorans <-
cbind(
as.data.frame(localmoran(final_net$countBurglaries, final_net.weights)),
as.data.frame(final_net)) %>%
st_sf() %>%
dplyr::select(Burglary_Count = countBurglaries,
Local_Morans_I = Ii,
P_Value = `Pr(z > 0)`) %>%
mutate(Significant_Hotspots = ifelse(P_Value <= 0.05, 1, 0)) %>%
gather(Variable, Value, -geometry)
vars <- unique(final_net.localMorans$Variable)
varList <- list()
for(i in vars){
varList[[i]] <-
ggplot() +
geom_sf(data = filter(final_net.localMorans, Variable == i),
aes(fill = Value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme() + theme(legend.position="bottom")}
do.call(grid.arrange,c(varList, ncol = 4, top = "Local Morans I statistics, Burglary"))
```
Why is this information useful? A generalizable model must predict equally well in the hotspots and the coldspots. A model fit only to the coldspots will underfit the hotspots, and vice versa. Not only is this local insight useful exploratory analysis, it can also be engineered into powerful spatial features to control for the local spatial process.
What is the appropriate scale for this local spatial process? Figure 5\.12 explores local hotspots by varying the p\-value of the Local Moran’s I. The smaller the p\-value, the more significant the clusters. A threshold of `0.0000001` captures only the very strong and significant burglary hot spots.
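The code for Figure 5\.12 is not included in the text; a minimal sketch like the one below, with thresholds chosen here purely for illustration, conveys the idea.
```
# A minimal sketch (assumed, not the book's exact figure code): flag
# significant hotspot cells at several p-value thresholds and map them.
local_morans <- localmoran(final_net$countBurglaries, final_net.weights)
hotspot_maps <- lapply(c(0.05, 0.01, 0.001, 0.0000001), function(p) {
  mutate(final_net,
         isSig = ifelse(local_morans[, 5] <= p, 1, 0),
         Threshold = paste("p <=", p))
})
do.call(rbind, hotspot_maps) %>%
  ggplot() +
  geom_sf(aes(fill = as.factor(isSig)), colour = NA) +
  scale_fill_manual(values = c("grey80", "#FDE725FF"), name = "Significant") +
  facet_wrap(~Threshold) +
  labs(title = "Burglary hotspots by Local Morans I p-value threshold") +
  mapTheme()
```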
The code block below creates a Local Moran’s *I* feature in `final_net`. As before, a dummy variable, `burglary.isSig`, denotes a cell as part of a significant cluster (a p\-value `<= 0.0000001`). `burglary.isSig.dist` then measures average nearest neighbor distance from each cell centroid to its nearest significant cluster. We can now model important information on the local spatial process of burglaries.
```
final_net <-
final_net %>%
mutate(burglary.isSig =
ifelse(localmoran(final_net$countBurglaries,
final_net.weights)[,5] <= 0.0000001, 1, 0)) %>%
mutate(burglary.isSig.dist =
nn_function(st_coordinates(st_centroid(final_net)),
st_coordinates(st_centroid(
filter(final_net, burglary.isSig == 1))), 1))
```
### 5\.4\.1 Correlation tests
Correlation gives important context while also providing intuition on features that may predict `countBurglaries`. The code block below creates a small multiple scatterplot of `countBurglaries` as a function of the risk factors. `correlation.long` converts `final_net` to long form. `correlation.cor` groups by `Variable`, and calculates the Pearson R correlation, shown directly on the plot.
Figure 5\.14 organizes count and nearest neighbor (`nn`) correlations side\-by\-side. While correlation for count features is a bit awkward, this approach can help with feature selection. For a given risk factor, avoid collinearity by selecting *either* the count or nearest neighbor feature. Just remember, when all features are entered into a multivariate regression, the correlations will change.
```
correlation.long <-
st_drop_geometry(final_net) %>%
dplyr::select(-uniqueID, -cvID, -loopDistance, -name, -District) %>%
gather(Variable, Value, -countBurglaries)
correlation.cor <-
correlation.long %>%
group_by(Variable) %>%
summarize(correlation = cor(Value, countBurglaries, use = "complete.obs"))
ggplot(correlation.long, aes(Value, countBurglaries)) +
geom_point(size = 0.1) +
geom_text(data = correlation.cor, aes(label = paste("r =", round(correlation, 2))),
x=-Inf, y=Inf, vjust = 1.5, hjust = -.1) +
geom_smooth(method = "lm", se = FALSE, colour = "black") +
facet_wrap(~Variable, ncol = 2, scales = "free") +
labs(title = "Burglary count as a function of risk factors") +
plotTheme()
```
5\.5 Poisson Regression
-----------------------
Take a look at the skewed distribution of `countBurglaries` in the topmost histogram of Figure 5\.15\. Given that burglary is a relatively rare event, it is reasonable for most grid cells to contain no crime events. When data is distributed this way, an OLS regression is inappropriate. In this section, a Poisson Regression is estimated, which is uniquely suited to modeling a count outcome like `countBurglaries`.
There are many different approaches to modeling burglary counts. Here, a Poisson Regression is used, which is based on a Poisson distribution, simulated in the bottommost histogram of Figure 5\.15\. Do the observed and simulated distributions appear similar? There are many flavors of count\-based regression, but the one used here is the simplest.[43](#fn43)
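The code behind Figure 5\.15 is not included in the text; a minimal sketch, assuming a Poisson simulation with `rpois` using the observed mean as `lambda`, might look like this.
```
# A minimal sketch (assumed, not the book's exact figure code): compare the
# observed distribution of countBurglaries to a Poisson simulation with the
# same mean.
rbind(
  data.frame(count = final_net$countBurglaries, Legend = "Observed"),
  data.frame(count = rpois(nrow(final_net),
                           lambda = mean(final_net$countBurglaries)),
             Legend = "Simulated (Poisson)")) %>%
  ggplot(aes(count)) +
  geom_histogram(binwidth = 1, fill = "#FDE725FF", colour = "black") +
  facet_wrap(~Legend, ncol = 1) +
  labs(title = "Observed vs. simulated Poisson distribution of burglary counts") +
  plotTheme()
```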
### 5\.5\.1 Cross\-validated Poisson Regression
Recall that generalizability is important 1\) for testing model performance on new data and 2\) for testing performance across different (spatial) group contexts, like neighborhoods. In this section, both are addressed.
Unlike home prices, `final_net` is not split into training and test sets. Instead, we move directly to cross\-validation, and because geospatial risk models are purely spatial, *spatial cross\-validation* becomes an important option.
A well generalized crime predictive model learns the crime risk ‘experience’ at both citywide and local spatial scales. The best way to test for this is to hold out one local area, train the model on the remaining *n \- 1* areas, predict for the hold out, and record the goodness of fit. In this form of spatial cross\-validation called ‘Leave\-one\-group\-out’ cross\-validation (LOGO\-CV), each neighborhood takes a turn as a hold\-out.
Imagine one neighborhood has a particularly unique local experience. LOGO\-CV assumes that the experience in other neighborhoods generalizes to this unique place \- which is a pretty rigid assumption.
Three `final_net` fields can be used for cross\-validation. A random generated `cvID` associated with each grid cell can be used for random k\-fold cross\-validation. Neighborhood `name` and Police `District` can be used for spatial cross\-validation.
Below, goodness of fit metrics are generated for four regressions \- two including `Just Risk Factors` (`reg.vars`), and two including the risk factors plus the Local Moran’s I `Spatial Process` features (`reg.ss.vars`) created in 5\.4\. These features are relatively simple \- can you improve on them?
```
reg.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance")
reg.ss.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance", "burglary.isSig", "burglary.isSig.dist")
```
The `crossValidate` function below is very simple and designed to take an input `dataset`; a cross\-validation `id`; a `dependentVariable`; and a list of independent variables, `indVariables`. `cvID_list` is a list of unique `id`s which could be numbers or neighborhood `name`, for instance.
For a given neighborhood `name`, the function assigns each grid cell *not* in that neighborhood to the training set, `fold.train`, and each cell *in* that neighborhood to the test set, `fold.test`. A model is trained from the former and used to predict on the latter. The process is repeated until each neighborhood has a turn acting as a hold out. **For now**, the `countBurglaries` variable is hardwired into the function, which you will **have to change** when using this function for future analyses. If you have read in the `functions.r` script from the Setup (5\.1\.3\), you can see the function by running `crossValidate`.
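For readers who want a sense of what such a function looks like before opening `functions.r`, here is a minimal sketch of a LOGO\-CV helper. It is written for illustration only and is not the book’s `crossValidate`; in particular, it builds the model formula from its arguments rather than hardwiring `countBurglaries`.
```
# A minimal sketch (assumed) of leave-one-group-out cross-validation: for each
# unique id, hold that group out, fit a Poisson regression on the remaining
# groups, and predict on the hold-out.
logocv_sketch <- function(dataset, id, dependentVariable, indVariables) {
  folds <- list()
  for (i in unique(dataset[[id]])) {
    fold.train <- dataset[dataset[[id]] != i, ]
    fold.test  <- dataset[dataset[[id]] == i, ]
    regression <- glm(
      as.formula(paste(dependentVariable, "~",
                       paste(indVariables, collapse = " + "))),
      family = "poisson", data = st_drop_geometry(fold.train))
    folds[[as.character(i)]] <-
      mutate(fold.test, Prediction = predict(regression, fold.test, type = "response"))
  }
  do.call(rbind, folds)
}
```
The analyses that follow use the book’s `crossValidate`, not this sketch.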
The code block below runs `crossValidate` to estimate four different regressions.
* `reg.cv` and `reg.ss.cv` perform random *k\-fold* cross validation using `Just Risk Factors` and the `Spatial Process` features, respectively.
* `reg.spatialCV` and `reg.ss.spatialCV` perform *LOGO\-CV*, spatial cross\-validation on neighborhood `name`, using the aforementioned two sets of features.
The function makes it easy to swap out different cross\-validation group `id`s. k\-fold cross\-validation uses the `cvID`. LOGO\-CV uses the neighborhood `name`. You may also wish to explore results using the police `District`s. Note, in the `select` operation at the end of the LOGO\-CV models, the cross\-validation `id` is standardized to `cvID`.
The result of each analysis is a `sf` layer with observed and predicted burglary counts.
```
reg.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.ss.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
reg.ss.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
```
### 5\.5\.2 Accuracy \& Generalizability
A host of goodness of fit metrics are calculated below with particular emphasis on generalizability across space. The code block below creates a long form `reg.summary`, that binds together observed/predicted counts and errors for each grid cell and for each `Regression`, along with the `cvID`, and the `geometry`.
```
reg.summary <-
rbind(
mutate(reg.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Just Risk Factors"),
mutate(reg.ss.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Spatial Process"),
mutate(reg.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Just Risk Factors"),
mutate(reg.ss.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Spatial Process")) %>%
st_sf()
```
In the code block below, `error_by_reg_and_fold` calculates and visualizes MAE for *each* fold across each regression. The `Spatial Process` features seem to reduce errors overall.
Recall, LOGO\-CV assumes the local spatial process from all *other* neighborhoods generalizes to the hold\-out. When the local spatial process is not accounted for (ie. `Just Risk Factors`), some neighborhood hold\-outs have MAEs greater than 4 burglaries. However, those large errors disappear when the `Spatial Process` features are added. The lesson is that there is a shared local burglary experience across Chicago, and accounting for it improves the model, particularly in the hotspots.
What more can you learn by plotting raw errors in this histogram format?
```
error_by_reg_and_fold <-
reg.summary %>%
group_by(Regression, cvID) %>%
summarize(Mean_Error = mean(Prediction - countBurglaries, na.rm = T),
MAE = mean(abs(Mean_Error), na.rm = T),
SD_MAE = mean(abs(Mean_Error), na.rm = T)) %>%
ungroup()
error_by_reg_and_fold %>%
ggplot(aes(MAE)) +
geom_histogram(bins = 30, colour="black", fill = "#FDE725FF") +
facet_wrap(~Regression) +
geom_vline(xintercept = 0) + scale_x_continuous(breaks = seq(0, 8, by = 1)) +
labs(title="Distribution of MAE", subtitle = "k-fold cross validation vs. LOGO-CV",
x="Mean Absolute Error", y="Count") +
plotTheme()
```
The table below builds on `error_by_reg_and_fold` to calculate the mean and standard deviation in errors by regression (note the additional `group_by`). The result confirms our conclusion that the `Spatial Process` features improve the model. The model appears slightly less robust for the spatial cross\-validation because LOGO\-CV is such a conservative assumption. For intuition on how severe these errors are, compare them to the observed mean `countBurglaries`.
```
st_drop_geometry(error_by_reg_and_fold) %>%
group_by(Regression) %>%
summarize(Mean_MAE = round(mean(MAE), 2),
SD_MAE = round(sd(MAE), 2)) %>%
kable() %>%
kable_styling("striped", full_width = F) %>%
row_spec(2, color = "black", background = "#FDE725FF") %>%
row_spec(4, color = "black", background = "#FDE725FF")
```
| Regression | Mean\_MAE | SD\_MAE |
| --- | --- | --- |
| Random k\-fold CV: Just Risk Factors | 0\.49 | 0\.35 |
| Random k\-fold CV: Spatial Process | 0\.42 | 0\.29 |
| Spatial LOGO\-CV: Just Risk Factors | 0\.98 | 1\.26 |
| Spatial LOGO\-CV: Spatial Process | 0\.62 | 0\.61 |
Table 5\.1: MAE by regression
Figure 5\.17 visualizes the LOGO\-CV errors spatially. Note the use of `str_detect` in the `filter` operation to pull out just the LOGO\-CV regression errors. These maps visualize where the higher errors occur when the local spatial process is not accounted for. Not surprisingly, the largest errors are in the hotspot locations.
```
error_by_reg_and_fold %>%
filter(str_detect(Regression, "LOGO")) %>%
ggplot() +
geom_sf(aes(fill = MAE)) +
facet_wrap(~Regression) +
scale_fill_viridis() +
labs(title = "Burglary errors by LOGO-CV Regression") +
mapTheme() + theme(legend.position="bottom")
```
As discussed in Chapter 4, accounting for the local spatial process should remove all spatial variation in `countBurglaries`, which should leave little spatial autocorrelation in model errors. To test this, the code block below calculates a new spatial weights matrix, `neighborhood.weights`, at the neighborhood rather than the grid cell scale. Global Moran’s *I* and p\-values are then calculated for each LOGO\-CV regression.
The results in Table 5\.2 provide more evidence that the `Spatial Process` features helped account for the spatial variation in burglary, although some still remains. More risk *and* protective factor features would be the next step to improve this, followed perhaps by engineering improved spatial process features.
```
neighborhood.weights <-
filter(error_by_reg_and_fold, Regression == "Spatial LOGO-CV: Spatial Process") %>%
group_by(cvID) %>%
poly2nb(as_Spatial(.), queen=TRUE) %>%
nb2listw(., style="W", zero.policy=TRUE)
filter(error_by_reg_and_fold, str_detect(Regression, "LOGO")) %>%
st_drop_geometry() %>%
group_by(Regression) %>%
summarize(Morans_I = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[1]],
p_value = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[3]])
```
| Regression | Morans\_I | p\_value |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | 0\.2505835 | 0\.001 |
| Spatial LOGO\-CV: Spatial Process | 0\.1501559 | 0\.013 |
Table 5\.2: Moran’s I on Errors by Regression
On to model predictions. Figure 5\.18 below maps predictions for the LOGO\-CV regressions. The `Spatial Process` features do a better job picking up the hotspots, as intended. Given the rigid assumptions of LOGO\-CV, it is impressive that other local hotspots can generally predict hotspots in hold\-out neighborhoods.
The spatial process features produce a smoother crime risk surface, relative to the observed counts. These predictions represent ‘latent crime risk’ \- areas at risk even if a crime hasn’t actually been observed. Accuracy is not as important as generalizability; nevertheless, of the 6817 observed burglaries, `Spatial LOGO-CV: Spatial Process` predicted 6804 burglaries citywide.
Interestingly, Figure 5\.19 below shows that all models over\-predict in low burglary areas and under\-predict in hot spot areas. Over\-predictions in lower burglary areas may highlight areas of latent risk. Under\-prediction in higher burglary areas may reflect difficulty predicting the hotspots.
Let’s now test for generalizability across racial\-neighborhood context, as we did in Chapter 4\.
```
st_drop_geometry(reg.summary) %>%
group_by(Regression) %>%
mutate(burglary_Decile = ntile(countBurglaries, 10)) %>%
group_by(Regression, burglary_Decile) %>%
summarize(meanObserved = mean(countBurglaries, na.rm=T),
meanPrediction = mean(Prediction, na.rm=T)) %>%
gather(Variable, Value, -Regression, -burglary_Decile) %>%
ggplot(aes(burglary_Decile, Value, shape = Variable)) +
geom_point(size = 2) + geom_path(aes(group = burglary_Decile), colour = "black") +
scale_shape_manual(values = c(2, 17)) +
facet_wrap(~Regression) + xlim(0,10) +
labs(title = "Predicted and observed burglary by observed burglary decile") +
plotTheme()
```
### 5\.5\.3 Generalizability by neighborhood context
Does the algorithm generalize across different neighborhood contexts? To test this proposition, `tidycensus` is used to pull race data by Census tract. `percentWhite` is calculated and tracts are split into two groups, `Majority_White` and `Majority_Non_White`. A spatial subset is used to get tracts within the study area.
Like Boston, Chicago is a very segregated city, as the map below shows.
```
tracts18 <-
get_acs(geography = "tract", variables = c("B01001_001E","B01001A_001E"),
year = 2018, state=17, county=031, geometry=T) %>%
st_transform('ESRI:102271') %>%
dplyr::select(variable, estimate, GEOID) %>%
spread(variable, estimate) %>%
rename(TotalPop = B01001_001,
NumberWhites = B01001A_001) %>%
mutate(percentWhite = NumberWhites / TotalPop,
raceContext = ifelse(percentWhite > .5, "Majority_White", "Majority_Non_White")) %>%
.[neighborhoods,]
```
As in Chapter 4, `Error` is calculated by subtracting the observed burglary count from the prediction; thus, a positive difference represents an over\-prediction. The least ideal result is a model that over\-predicts risk in Minority areas and under\-predicts in White areas. If reporting selection bias is an issue, such a model *may* unfairly allocate police resources disproportionately in Black and Brown communities. The table below compares average (non\-absolute) errors for the LOGO\-CV regressions by `raceContext`, by joining the fishnet grid cell centroids to tract boundaries.
The model, on average, under\-predicts in `Majority_Non_White` neighborhoods and over\-predicts in `Majority_White` neighborhoods. The `Spatial Process` model not only reports lower errors overall, but also a smaller difference in errors across neighborhood context.
It looks like this algorithm generalizes well with respect to race, right? We will return to this question in the conclusion. In the last stage of the analysis, the utility of this algorithm is judged relative to an alternative police allocation method.
```
reg.summary %>%
filter(str_detect(Regression, "LOGO")) %>%
st_centroid() %>%
st_join(tracts18) %>%
na.omit() %>%
st_drop_geometry() %>%
group_by(Regression, raceContext) %>%
summarize(mean.Error = mean(Error, na.rm = T)) %>%
spread(raceContext, mean.Error) %>%
kable(caption = "Mean Error by neighborhood racial context") %>%
kable_styling("striped", full_width = F)
```
| Regression | Majority\_Non\_White | Majority\_White |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | \-0\.1164018 | 0\.1211148 |
| Spatial LOGO\-CV: Spatial Process | \-0\.0525908 | 0\.0421751 |
Table 5\.3: Mean Error by neighborhood racial context
### 5\.5\.4 Does this model allocate better than traditional crime hotspots?
Police departments all over the world use hotspot policing to target police resources to the places where crime is most concentrated. In this section, we ask whether risk predictions outperform traditional ‘Kernel density’ hotspot mapping[44](#fn44). To add an element of *across\-time* generalizability, hotspot and risk predictions from these 2017 burglaries are used to predict the location of burglaries from *2018*.
Kernel density works by centering a smooth kernel, or curve, atop each crime point such that the curve is at its highest directly over the point and at its lowest at the edge of a circular search radius. The density in a particular place is the sum of all the kernels that underlie it. Thus, areas with many nearby points have relatively high densities. The key scale assumption in kernel density is the use of a global search radius parameter. Because of its reliance on nearby points, think of Kernel density as making ‘predictions’ based purely on spatial autocorrelation.
Figure 5\.20 visualizes three Kernel density maps at three different scales. Note the different burglary hotspot ‘narratives’ depending on the radius used.
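The code for Figure 5\.20 is not reproduced in the text; a minimal sketch like the following, with search radii chosen here purely for illustration, shows how such a comparison could be built.
```
# A minimal sketch (assumed, not the book's exact figure code): kernel
# densities at three illustrative search radii, aggregated to the fishnet.
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
kd_maps <- lapply(c(1000, 1500, 2000), function(radius) {
  as.data.frame(spatstat::density.ppp(burg_ppp, radius)) %>%
    st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
    aggregate(., final_net, mean) %>%
    mutate(Radius = paste(radius, "ft search radius"))
})
do.call(rbind, kd_maps) %>%
  ggplot() +
  geom_sf(aes(fill = value), colour = NA) +
  scale_fill_viridis(name = "Density") +
  facet_wrap(~Radius) +
  labs(title = "Kernel density at three search radii") +
  mapTheme()
```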
The code block below creates a Kernel density map with a `1000` foot search radius using the `spatstat` package. `as.ppp` converts burglary coordinates to a `ppp` class. The `density` function creates the Kernel density. To map, the `ppp` is converted to a data frame and then to an `sf` layer. Points are spatially joined to the `final_net` and the mean density is taken. Here density is visualized with a `sample_n` of 1,500 points overlaid on top.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
ggplot() +
geom_sf(aes(fill=value)) +
geom_sf(data = sample_n(burglaries, 1500), size = .5) +
scale_fill_viridis(name = "Density") +
labs(title = "Kernel density of 2017 burglaries") +
mapTheme()
```
Next, a new goodness of fit indicator is created to illustrate whether the *2017* kernel density or risk predictions capture more of the *2018* burglaries. If the risk predictions capture more observed burglaries than the kernel density, then the risk prediction model provides a more robust targeting tool for allocating police resources. Here are the steps.
1. Download `burglaries18` from the Chicago Open Data site.
2. Compute the Kernel Density on 2017 `burglaries`.
3. Scale the Kernel density values to run from 1\-100 and then reclassify those values into 5 risk categories.
4. Spatial join the density to the fishnet.
5. Join the count of burglaries *in 2018* for each grid cell to the fishnet.
6. Repeat for the risk predictions.
7. Take the rate of 2018 points by model type and risk category. Map and plot accordingly.
Step 1 downloads `burglaries18`.
```
burglaries18 <-
read.socrata("https://data.cityofchicago.org/Public-Safety/Crimes-2018/3i3m-jwuy") %>%
filter(Primary.Type == "BURGLARY" &
Description == "FORCIBLE ENTRY") %>%
mutate(x = gsub("[()]", "", Location)) %>%
separate(x,into= c("Y","X"), sep=",") %>%
mutate(X = as.numeric(X),
Y = as.numeric(Y)) %>%
na.omit %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform('ESRI:102271') %>%
distinct() %>%
.[fishnet,]
```
Next, Kernel density is computed on the 2017 burglaries. `burg_KDE_sf` converts the density to an sf layer; spatially joins (`aggregate`) it to the fishnet; converts the density to 100 percentiles (`ntile`); and then reclassifies those percentiles into 5 risk categories. Finally, one last spatial join adds the count of observed burglaries in 2018\.
Check out `head(burg_KDE_sf)` to see the result.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
burg_KDE_sf <- as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
mutate(label = "Kernel Density",
Risk_Category = ntile(value, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label, Risk_Category, burgCount)
```
The same process is repeated for risk predictions. Note the prediction from the LOGO\-CV with the spatial features is being used here.
```
burg_risk_sf <-
filter(reg.summary, Regression == "Spatial LOGO-CV: Spatial Process") %>%
mutate(label = "Risk Predictions",
Risk_Category = ntile(Prediction, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label,Risk_Category, burgCount)
```
For each grid cell and model type (density vs. risk prediction), there is now an associated risk category and 2018 burglary count. Below, a map of the risk categories for both model types is generated, with a sample of `burglaries18` points overlaid. A strongly fit model should show that the highest risk category is uniquely targeted to places with a high density of burglary points.
Is this what we see? High risk categories with few 2018 observed burglaries may suggest latent risk or a poorly fit model. This ambiguity is why accuracy for geospatial risk models is tough to judge. Nevertheless, more features/feature engineering would be helpful.
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
na.omit() %>%
gather(Variable, Value, -label, -Risk_Category, -geometry) %>%
ggplot() +
geom_sf(aes(fill = Risk_Category), colour = NA) +
geom_sf(data = sample_n(burglaries18, 3000), size = .5, colour = "black") +
facet_wrap(~label) +
scale_fill_viridis(discrete = TRUE) +
labs(title="Comparison of Kernel Density and Risk Predictions",
subtitle="2017 burglar risk predictions; 2018 burglaries") +
mapTheme()
```
Finally, the code block below calculates the rate of 2018 burglary points by risk category and model type. A well fit model should show that the risk predictions capture a greater share of 2018 burglaries *in the highest risk category* relative to the Kernel density.
The risk prediction model narrowly edges out the Kernel Density in the top two highest risk categories \- suggesting this simple model has some value relative to the business\-as\-usual hot spot approach. Thus, we may have developed a tool to help target police resources, but is it a useful planning tool?
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
st_set_geometry(NULL) %>% na.omit() %>%
gather(Variable, Value, -label, -Risk_Category) %>%
group_by(label, Risk_Category) %>%
summarize(countBurglaries = sum(Value)) %>%
ungroup() %>%
group_by(label) %>%
mutate(Rate_of_test_set_crimes = countBurglaries / sum(countBurglaries)) %>%
ggplot(aes(Risk_Category,Rate_of_test_set_crimes)) +
geom_bar(aes(fill=label), position="dodge", stat="identity") +
scale_fill_viridis(discrete = TRUE) +
labs(title = "Risk prediction vs. Kernel density, 2018 burglaries") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, vjust = 0.5))
```
5\.6 Conclusion \- Bias but useful?
-----------------------------------
In this chapter, a geospatial risk prediction model borrows the burglary experience in places where it has been observed, and tests whether that experience generalizes to places where burglary risk may be high, despite few actual events. Should these tests hold, the resulting predictions can be thought of as ‘latent risk’ for burglary and can be used to allocate police response across space.
We introduced new and powerful feature engineering strategies to capture the local spatial process. Spatial cross\-validation was also introduced as an important test of across\-space generalizability.
Despite finding that the model generalizes well across different neighborhood contexts, we cannot be sure the model doesn’t suffer from selection bias. As discussed in 5\.1, if law enforcement systematically over\-polices certain communities, and this selection criterion goes unaccounted for in the model, then the model may be biased regardless of the above tests.
Nevertheless, we have demonstrated that even a simple risk prediction algorithm may outperform traditional hot spot analysis. By adding the element of time/seasonality (see Chapter 8\) and deploying predictions in an intuitive user interface, this approach could easily compete with commercial Predictive Policing products. Should it?
Imagine if ‘back\-testing’ the algorithm on historical data showed that its use would have predicted 20% more burglary than the current allocation process (either Kernel density or otherwise). What if we could show Chicago officials that paying us $100k/year for a license to our algorithm would reduce property crime by $10 million \- should they buy it?
Such a cost/benefit argument makes for a powerful sales pitch \- and it is why data science is such a fast growing industry. However, as I have mentioned, economic bottom lines are far from the only bottom lines in government.
What if the $10 million in savings leads police to increase enforcement and surveillance disproportionately in Black and Brown communities? Worse, what about feedback effects, where steering police to these neighborhoods causes more reported crime, which then leads to increased predicted risk?
Stakeholders will either conclude, “but this tech reduces crime \- this is a no\-brainer!” or, “this technology perpetuates a legacy of systematic racism, fear, and disenfranchisement!”
Both opinions have merit and the right approach is a function of community standards. Many big cities are currently at this point. Law enforcement sees machine learning as the next logical progression in the analytics they have been building for years. At the same time, critics and community stakeholders see these as tools of the surveillance state.
These two groups need a forum for communicating these views and for agreeing on the appropriate community standard. I refer to such a forum as one being part of the ‘algorithmic governance’ process, discussed in more detail in the book’s Conclusion.
Finally, while these models may not be appropriate for crime prediction, there are a host of other Planning outcomes that could benefit greatly. In the past I have built these models to predict risks for outcomes like fires and child maltreatment. I have also built them to predict where a company’s next retail store should go. I urge readers to think about how these models could be used to predict ‘opportunity’, not just risk.
As always, this begins with an understanding of the use case and the current resource allocation process. The outcome of all this code should not be just a few maps and tables, but a strategic plan that converts these predictions into actionable intelligence.
5\.7 Assignment \- Predict risk
-------------------------------
Your job is to build a version of this model **for a different outcome that likely suffers from more selection bias than burglary**. You can build this model in Chicago or any other city with sufficient open data resources. Please also add at least **two** new features not used above, and iteratively build models until you have landed on one that optimizes for accuracy and generalizability.
Your final deliverable should be **in R markdown** form **with code blocks**. Please provide the following materials with **brief annotations** (please don’t forget this):
1. A map of your outcome of interest in point form, with some description of what, when, and why you think selection bias may be an issue.
2. A map of your outcome joined to the fishnet.
3. A small multiple map of your risk factors in the fishnet (counts, distance and/or other feature engineering approaches).
4. Local Moran’s I\-related small multiple map of your outcome (see 5\.4\)
5. A small multiple scatterplot with correlations.
6. A histogram of your dependent variable.
7. A small multiple map of model errors by random k\-fold and spatial cross validation.
8. A table of MAE and standard deviation MAE by regression.
9. A table of raw errors by race context for a random k\-fold vs. spatial cross validation regression.
10. The map comparing kernel density to risk predictions for *the next year’s crime*.
11. The bar plot making this comparison.
12. Two paragraphs on why or why not you would recommend your algorithm be put into production.
5\.1 New predictive policing tools
----------------------------------
Of the few public sector machine learning algorithms in use, few are as prevalent or as despised as the suite of algorithms commonly referred to as ‘Predictive Policing’. There are several flavors of Predictive Policing, from forecasting where crime will happen and who will commit it, to predicting judicial responses and investigative outcomes.
Policing has driven public\-sector machine learning because law enforcement has significant planning and resource allocation questions. In 2017, the Chicago Police Department recorded 268,387 incidents or 735 incidents per day, on average. Nearly 12,000 sworn officers patrol a 228 square mile area. How should police commanders strategically allocate these officers in time and space?
As discussed in 0\.3, data science is an ideal planning tool to ensure that the supply of a limited resource (e.g. policing), matches the demand for those resources (e.g. crime). For many years now, police departments have been all\-in on using crime\-related data for planning purposes.
New York City’s CompStat program, which began in the mid\-1990s, was set up to promote operational decision\-making through the compilation of crime statistics and maps. A 1999 survey found that 11% of small and 33% of large police departments nationally had implemented a CompStat program.[29](#fn29) By 2013, 20% of surveyed police departments said they had at least 1 full\-time sworn officer “conducting research…using computerized records”, and 38% reported at least one full\-time non\-sworn staff member.[30](#fn30)
Emerging demand for this technology, a growing abundance of cloud services, and sophisticated machine learning algorithms have all given rise to several commercial Predictive Policing products. Several of these focus on place\-based predictions, including PredPol[31](#fn31), Risk Terrain Modeling[32](#fn32), and ShotSpotter Missions.[33](#fn33)
Today, with growing awareness that Black Lives Matter and that policing resources in America are allocated in part, by way of systematic racism, critics increasingly charge that Predictive Policing is harmful. Some cities like Santa Cruz, CA have banned the practice outright[34](#fn34), while others like Pittsburgh are pivoting the technology to allocate social services instead of law enforcement.[35](#fn35)
With this critique in mind, why include Predictive Policing in this book? First and foremost, this chapter provides important context for how to judge a ‘useful’ machine learning model in government (Section 0\.3\). Second, it provides an even more nuanced definition of model generalizability. Finally, it is the only open source tutorial on geospatial risk modeling that I am aware of. Whether or not these models proliferate (I don’t see them going away any time soon), it will be helpful for stakeholders to gain a shared understanding of their inner\-workings to prevent increased bias and unwarranted surveillance.
### 5\.1\.1 Generalizability in geospatial risk models
A geospatial risk model is a regression model, not unlike those covered in the previous two chapters. The dependent variable is the occurrence of discrete events like crime, fires, successful sales calls, locations of donut shops, etc.[36](#fn36) Predictions from these models are interpreted as ‘the forecasted risk/opportunity of that event occurring *here*’.
For crime, the hypothesis is that crime risk is a function of *exposure* to a series of geospatial risk and protective factors, such as blight or recreation centers, respectively. The assumption is that as exposure to risk factors increases, so does crime risk.
The goal is to borrow the experience from places where crime is observed and test whether that experience generalizes to places that may be at risk for crime, even if few events are reported. Crime is relatively rare, and observed crime likely discounts actual crime risk. Predictions from geospatial risk models can be thought of as *latent risk* \- areas at risk even if a crime has not actually been reported there. By this definition, it is easy to see why such a forecast would be appealing for police.
Interestingly, a model that predicts crime risk equal to observed crime (a Mean Absolute Error of `0`), would not reveal any latent risk. As a resource allocation tool, this would not be any more useful than one which tells police to return to yesterday’s crime scene. Thus, accuracy is not as critical for this use case, as it was in the last use case.
Generalizability, on the other hand, is incredibly important. Of the two definitions previously introduced, the second will be more important here. A generalizable geospatial risk prediction model is one that predicts risk with comparable accuracy across different neighborhoods. When crime is the outcome of interest, this is anything but straightforward.
Consider that home sale prices from Chapters 3 and 4 are pulled from a complete sample \- any house that transacted shows up in the data. Not only is, say, the location of every drug offense not a complete sample, but it is not likely a representative sample, either. For a drug offense event to appear in the data, it must be observed by law enforcement. Many drug offenses go unobserved, and worse, officers may selectively choose to enforce drug crime more fervently in some communities than others.[37](#fn37)
If selective enforcement is the result of an officer’s preconceived or biased beliefs, then this ‘selection bias’ will be baked into the crime outcome. If a model fails to account for this bias, it will fall into the error term of the regression and lead to predictions that do not generalize across space.
One might assume that additional controls for race can help account for the selection bias, but the true nature of the bias is unknown. It may be entirely about race, partially so, or a confluence of factors; and likely some officers are more biased than others. Thus, without observing the explicit selection *criteria*, it is not possible to fully account for the selection bias.
For this reason, a model trained to predict drug offense risk, for instance, is almost certainly not useful. However, one trained to predict building fire risk may be. Fire departments cannot selectively choose which building fires to extinguish and which to let rage. Thus, all else equal, the building fire outcome (and ultimately the predictions) is more likely to generalize across space.
Selection bias may exist in the dependent variable, but it may also exist in the features as well. A second reason why these models may not generalize has to do with the exposure hypothesis.
The rationale for the exposure hypothesis can be traced in part to the Broken Windows Theory, which posits a link between community ‘disorder’ and crime.[38](#fn38) The theory is that features of the built environment such as blight and disrepair may signal a local tolerance for criminality. A host of built environment risk factors impact crime risk, but many of these suffer from selection bias.[39](#fn39)
Perhaps graffiti is a signal that criminality is more locally accepted. The best source of address\-level graffiti data comes from 311 open datasets.[40](#fn40) Assume graffiti was equally distributed throughout the city, but only residents of certain neighborhoods (maybe those where graffiti was rare) chose to file 311 reports. If this reporting criteria is not observed in the model, model predictions would reflect this spatial selection bias and predictions would not generalize.
Operators of ‘risk factors’ like homeless shelters, laundromats, and check cashing outlets may select into certain neighborhoods based on market preferences. Spatial selection here may also be an issue. So why is this important and what role does Broken Windows and selection bias play in Predictive Policing?
### 5\.1\.2 From Broken Windows Theory to Broken Windows Policing
There is a wealth of important criminological and social science research on the efficacy of Broken Windows Theory. While I am unqualified to critique these findings, I do think it is useful to consider the consequences of developing a police allocation tool based on Broken Windows Theory.
Racist place\-based policies, like redlining and mortgage discrimination, corralled low\-income minorities into segregated neighborhoods without the tools for economic empowerment.[41](#fn41) Many of these communities are characterized by blight and disrepair. A risk model predicated on disrepair (Broken Windows) might only perpetuate these same racist place\-based policies.
In other words, if the data inputs to a forecasting model are biased against communities of color, then the prediction outputs of that model will also be biased against those places. This bias compounds when risk predictions are converted to resource allocation. Surveillance increases, more crimes are ‘reported’, and more risk is predicted.
When a reasonable criminological theory is operationalized into an empirical model based on flawed data, the result is a decision\-making tool that is not useful.
Because the lack of generalizability is driven in part by unobserved information like selection bias, despite our best efforts below, we still will not know the degree of bias baked in the risk predictions. While a different outcome, like fire, could have been chosen to demonstrate geospatial risk prediction, policing is the most ubiquitous example of the approach, and worth our focus.
In this chapter, a geospatial risk predictive model of burglary is created. We begin by wrangling burglary and risk factor data into geospatial features, correlating their exposure, and estimating models to predict burglary latent risk. These models are then validated in part, by comparing predictions to a standard, business\-as\-usual measure of geospatial crime risk.
### 5\.1\.3 Setup
Begin by loading the requisite packages and functions, including the `functions.R` script.
```
library(tidyverse)
library(sf)
library(RSocrata)
library(viridis)
library(spatstat)
library(raster)
library(spdep)
library(FNN)
library(grid)
library(gridExtra)
library(knitr)
library(kableExtra)
library(tidycensus)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
```
5\.2 Data wrangling: Creating the `fishnet`
-------------------------------------------
What is the appropriate unit of analysis for predicting geospatial crime risk? No matter the answer, scale biases are likely (Chapter 1\). The best place to start, however, is with the resource allocation process. Many police departments divide the city into administrative areas in which some measure of decision\-making autonomy is allowed. How useful would a patrol allocation algorithm be if predictions were made at the Police Beat or District scale, as in Figure 5\.1?
Tens or hundreds of thousands of people live in a Police District, so a single, District\-wide risk prediction would not provide precise enough intelligence on where officers should patrol. In the code below, `policeDistricts` and `policeBeats` are downloaded.
```
policeDistricts <-
st_read("https://data.cityofchicago.org/api/geospatial/fthy-xz3r?method=export&format=GeoJSON") %>%
st_transform('ESRI:102271') %>%
dplyr::select(District = dist_num)
policeBeats <-
st_read("https://data.cityofchicago.org/api/geospatial/aerh-rz74?method=export&format=GeoJSON") %>%
st_transform('ESRI:102271') %>%
dplyr::select(District = beat_num)
bothPoliceUnits <- rbind(mutate(policeDistricts, Legend = "Police Districts"),
mutate(policeBeats, Legend = "Police Beats"))
```
Instead, consider crime risk not as a phenomenon that varies across administrative units, but one varying smoothly across the landscape, like elevation. Imagine that crime clusters in space, and that crime risk dissipates outward from these ‘hot spots’, like elevation dips from mountaintops to valleys. The best way to represent this spatial trend in a regression\-ready form, is to aggregate point\-level data into a lattice of grid cells.
This grid cell lattice is referred to as the `fishnet` and created below from a `chicagoBoundary` that omits O’Hare airport in the northwest of the city. `st_make_grid` is used to create a `fishnet` with 500ft by 500ft grid cells.[42](#fn42)
```
chicagoBoundary <-
st_read(file.path(root.dir,"/Chapter5/chicagoBoundary.geojson")) %>%
st_transform('ESRI:102271')
fishnet <-
st_make_grid(chicagoBoundary, cellsize = 500) %>%
st_sf() %>%
mutate(uniqueID = rownames(.))
```
Next, `burglaries` are downloaded from the Chicago Open Data site using the `RSocrata` package. Socrata is the creator of the open source platform that Chicago uses to share its data.
The code block below downloads the data and selects only ‘forcible entry’ burglaries. Some additional data wrangling removes extraneous characters from the `Location` field with `gsub`. The resulting field is then split into `X` and `Y` coordinate fields with `separate`. Those fields are then made numeric, converted to simple features, projected, and duplicate geometries are removed with `distinct`. The density map (`stat_density2d`) shows some clear burglary hotspots in Chicago.
```
burglaries <-
read.socrata("https://data.cityofchicago.org/Public-Safety/Crimes-2017/d62x-nvdr") %>%
filter(Primary.Type == "BURGLARY" & Description == "FORCIBLE ENTRY") %>%
mutate(x = gsub("[()]", "", Location)) %>%
separate(x,into= c("Y","X"), sep=",") %>%
mutate(X = as.numeric(X),Y = as.numeric(Y)) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant")%>%
st_transform('ESRI:102271') %>%
distinct()
```
### 5\.2\.1 Data wrangling: Joining burglaries to the `fishnet`
To get the count of burglaries by grid cell, first `mutate` a ‘counter’ field, `countBurglaries`, for each burglary event, and then spatial join (`aggregate`) burglary points to the `fishnet`, taking the sum of `countBurglaries`.
A grid cell with no burglaries receives `NA`, which is converted to `0` with `replace_na`. A random `uniqueID` is generated for each grid cell, as well as a random group `cvID` (sampled from `round(nrow(fishnet) / 24)` unique values), used later for k\-fold cross\-validation.
Figure 5\.4 maps the count of burglaries by grid cell, and the clustered spatial process of burglary begins to take shape. Notice the use of `scale_fill_viridis` from the `viridis` package, which automatically applies the blue\-through\-yellow color ramp.
```
crime_net <-
dplyr::select(burglaries) %>%
mutate(countBurglaries = 1) %>%
aggregate(., fishnet, sum) %>%
mutate(countBurglaries = replace_na(countBurglaries, 0),
uniqueID = rownames(.),
cvID = sample(round(nrow(fishnet) / 24), size=nrow(fishnet), replace = TRUE))
ggplot() +
geom_sf(data = crime_net, aes(fill = countBurglaries)) +
scale_fill_viridis() +
labs(title = "Count of Burglaires for the fishnet") +
mapTheme()
```
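As a quick sanity check (a sketch, not from the text), you can confirm how many unique folds the sampling above actually produced:
```
# a sketch: count the unique cross-validation folds generated above
length(unique(crime_net$cvID))
```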
### 5\.2\.2 Wrangling risk factors
The `crime_net` includes counts of the dependent variable, burglary. Next, a small set of risk factor features are downloaded and wrangled to the `fishnet`. The very simple model created in this chapter is based on a limited set of features. A typical analysis would likely include many more.
Six risk factors are downloaded, including 311 reports of abandoned cars, street lights out, graffiti remediation, sanitation complaints, and abandoned buildings, along with a neighborhood polygon layer and the location of retail stores that sell liquor to go.
Take note of the approach used to wrangle each dataset: the data is downloaded; year and coordinate fields are created; and the coordinates are converted to `sf`. The data is then projected, and a `Legend` field is added to label the risk factor. This allows each layer to be bound (`rbind`) into one dataset for small multiple mapping, as shown in Figure 5\.5\.
The `graffiti` code block is the first where we have seen the `%in%` operator, which enables `filter` to take inputs from a list rather than chaining together several ‘or’ (`|`) statements.
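As a quick, hypothetical illustration (the data frame `dat` and column `loc` below are invented for demonstration only), the two `filter` calls are equivalent:
```
# invented data for demonstration only
dat <- data.frame(loc = c("Front", "Rear", "Side", "Alley", "Roof"))

# %in% checks membership in a list...
filter(dat, loc %in% c("Front", "Rear", "Side"))

# ...which is equivalent to chaining 'or' statements
filter(dat, loc == "Front" | loc == "Rear" | loc == "Side")
```
With that in hand, the risk factor layers are wrangled below.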
```
abandonCars <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Abandoned-Vehicles/3c9v-pnva") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Abandoned_Cars")
abandonBuildings <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Vacant-and-Abandoned-Building/7nii-7srd") %>%
mutate(year = substr(date_service_request_was_received,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Abandoned_Buildings")
graffiti <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Graffiti-Removal-Historical/hec5-y4x5") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
filter(where_is_the_graffiti_located_ %in% c("Front", "Rear", "Side")) %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Graffiti")
streetLightsOut <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Street-Lights-All-Out/zuxi-7xem") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Street_Lights_Out")
sanitation <-
read.socrata("https://data.cityofchicago.org/Service-Requests/311-Service-Requests-Sanitation-Code-Complaints-Hi/me59-5fac") %>%
mutate(year = substr(creation_date,1,4)) %>% filter(year == "2017") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Sanitation")
liquorRetail <-
read.socrata("https://data.cityofchicago.org/resource/nrmj-3kcf.json") %>%
filter(business_activity == "Retail Sales of Packaged Liquor") %>%
dplyr::select(Y = latitude, X = longitude) %>%
na.omit() %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform(st_crs(fishnet)) %>%
mutate(Legend = "Liquor_Retail")
neighborhoods <-
st_read("https://raw.githubusercontent.com/blackmad/neighborhoods/master/chicago.geojson") %>%
st_transform(st_crs(fishnet))
```
5\.3 Feature engineering \- Count of risk factors by grid cell
--------------------------------------------------------------
We have already seen feature engineering strategies for measuring exposure, but grid cells add a nuance. We start by building `vars_net`, joining a long form layer of risk factor events to the `fishnet`. First, the individual risk factor layers are bound together. Next, the fishnet is spatially joined (`st_join`) *to each point*, so the outcome is a large point data frame with the `uniqueID` of the enclosing grid cell attached to each point.
That output is then converted from a long form layer of risk factor points with grid cell `uniqueID`s, to a wide form layer of grid cells with risk factor columns. This is done by grouping on grid cell `uniqueID` and risk factor `Legend`, then counting the events. The `full_join` adds the fishnet geometries and `spread` converts to wide form. `vars_net` is now regression\-ready.
```
vars_net <-
rbind(abandonCars,streetLightsOut,abandonBuildings,
liquorRetail, graffiti, sanitation) %>%
st_join(., fishnet, join=st_within) %>%
st_drop_geometry() %>%
group_by(uniqueID, Legend) %>%
summarize(count = n()) %>%
full_join(fishnet) %>%
spread(Legend, count, fill=0) %>%
st_sf() %>%
dplyr::select(-`<NA>`) %>%
na.omit() %>%
ungroup()
```
Now let’s map each `vars_net` feature as a small multiple map. Each risk factor has a different range (min, max) and at this stage of `sf` development, `facet_wrap` is unable to calculate unique legends for each. Instead, a function is used to loop through the creation of individual risk factor maps and compile (i.e. `grid.arrange`) them into one small multiple plot.
`vars_net.long` is `vars_net` in long form; `vars` is a vector of `Variable` names; and `mapList` is an empty list. The loop says for each `Variable`, *i*, in `vars`, create a map by `filter`ing for `Variable` *i* and adding it to `mapList`. Once all six risk factor maps have been created, `do.call` then loops through the `mapList` and arranges each map in the small multiple plot. Try to recreate this visualization with `facet_wrap` and notice the difference in legends.
These risk factors illustrate slightly different spatial processes. `Abandoned_Buildings` are mainly clustered in South Chicago, while `Abandoned_Cars` are mostly in the north. 311 complaints for Graffiti tend to cluster along major thoroughfares and `Liquor_Retail` is heavily clustered in “The Loop”, Chicago’s downtown. In 5\.4\.1, we’ll test the correlation between these features and `burglaries`.
```
vars_net.long <-
gather(vars_net, Variable, value, -geometry, -uniqueID)
vars <- unique(vars_net.long$Variable)
mapList <- list()
for(i in vars){
mapList[[i]] <-
ggplot() +
geom_sf(data = filter(vars_net.long, Variable == i), aes(fill=value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme()}
do.call(grid.arrange,c(mapList, ncol=3, top="Risk Factors by Fishnet"))
```
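As suggested above, the same small multiples can be sketched with `facet_wrap`; note the single shared legend, which is why the loop approach is used instead:
```
# a sketch of the facet_wrap alternative; note the single shared legend
ggplot(vars_net.long) +
  geom_sf(aes(fill = value), colour = NA) +
  facet_wrap(~Variable, ncol = 3) +
  scale_fill_viridis(name = "") +
  labs(title = "Risk Factors by Fishnet (facet_wrap version)") +
  mapTheme()
```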
### 5\.3\.1 Feature engineering \- Nearest neighbor features
Consider how grid cell counts impose a very rigid spatial scale of exposure. Our second feature engineering approach is to calculate average nearest neighbor distance (3\.2\.1\) to hypothesize a smoother exposure relationship across space. Here, the `nn_function` is used.
Average nearest neighbor features are created by converting `vars_net` grid cells to centroid points, then measuring the distance from each centroid to its *k* nearest risk factor points. Note, the `nn_function` requires both input layers to be points. For demonstration purposes *k* is set to 3, but ideally, one would test different *k* definitions of scale. Two shortcut functions are created to make the code block less verbose.
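The `nn_function` itself is loaded from `functions.r`; a minimal sketch of the underlying idea, assuming an implementation built on `FNN::get.knnx` (an assumption \- run `nn_function` to inspect the actual version), looks like this:
```
# a sketch of an average nearest neighbor distance helper, assuming FNN::get.knnx
nn_sketch <- function(measureFrom, measureTo, k) {
  # distance from each 'from' point to its k nearest 'to' points
  dists <- FNN::get.knnx(measureTo, measureFrom, k = k)$nn.dist
  rowMeans(dists)  # average across the k neighbors
}
```
The shortcuts and nearest neighbor features are then created below.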
```
st_c <- st_coordinates
st_coid <- st_centroid
vars_net <-
vars_net %>%
mutate(
Abandoned_Buildings.nn =
nn_function(st_c(st_coid(vars_net)), st_c(abandonBuildings),3),
Abandoned_Cars.nn =
nn_function(st_c(st_coid(vars_net)), st_c(abandonCars),3),
Graffiti.nn =
nn_function(st_c(st_coid(vars_net)), st_c(graffiti),3),
Liquor_Retail.nn =
nn_function(st_c(st_coid(vars_net)), st_c(liquorRetail),3),
Street_Lights_Out.nn =
nn_function(st_c(st_coid(vars_net)), st_c(streetLightsOut),3),
Sanitation.nn =
nn_function(st_c(st_coid(vars_net)), st_c(sanitation),3))
```
The nearest neighbor features are then plotted below. Note the use of `select` and `ends_with` to map only the nearest neighbor features.
```
vars_net.long.nn <-
dplyr::select(vars_net, ends_with(".nn")) %>%
gather(Variable, value, -geometry)
vars <- unique(vars_net.long.nn$Variable)
mapList <- list()
for(i in vars){
mapList[[i]] <-
ggplot() +
geom_sf(data = filter(vars_net.long.nn, Variable == i), aes(fill=value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme()}
do.call(grid.arrange,c(mapList, ncol = 3, top = "Nearest Neighbor risk Factors by Fishnet"))
```
### 5\.3\.2 Feature Engineering \- Measure distance to one point
It also may be reasonable to measure distance to a single point, like the centroid of the Loop \- Chicago’s Central Business District. This is done with `st_distance`.
```
loopPoint <-
filter(neighborhoods, name == "Loop") %>%
st_centroid()
vars_net$loopDistance =
st_distance(st_centroid(vars_net),loopPoint) %>%
as.numeric()
```
### 5\.3\.3 Feature Engineering \- Create the `final_net`
Next, the `crime_net` and `vars_net` layers are joined into one regression\-ready, `final_net`. In the code block below, the `left_join` is enabled by converting `vars_net` to a data frame with `st_drop_geometry`.
```
final_net <-
left_join(crime_net, st_drop_geometry(vars_net), by="uniqueID")
```
Neighborhood `name` and police `District` are spatially joined to the `final_net` using grid cell centroids (1\.2\.3\). The `st_drop_geometry` and `left_join` operations then drop the point centroid geometries, joining the result back to grid cell geometries and converting again to `sf`. Some grid cell centroids do not fall into a neighborhood (returning `NA`) and are removed with `na.omit`. Other neighborhoods are so small, they are only represented by one grid cell.
```
final_net <-
st_centroid(final_net) %>%
st_join(dplyr::select(neighborhoods, name)) %>%
st_join(dplyr::select(policeDistricts, District)) %>%
st_drop_geometry() %>%
left_join(dplyr::select(final_net, geometry, uniqueID)) %>%
st_sf() %>%
na.omit()
```
5\.4 Exploring the spatial process of burglary
----------------------------------------------
In 4\.2\.1, ‘Global’ Moran’s *I* was used to test for spatial autocorrelation (clustering) of home prices and model errors. This information provided insight into the spatial process of home prices, accounting for neighborhood scale clustering, but not clustering at more local scales. Here, that local spatial process is explored.
To do so, a statistic called Local Moran’s *I* is introduced. Here, the null hypothesis is that the burglary count at a given location is randomly distributed relative to its *immediate neighbors*.
Like its global cousin, a spatial weights matrix is used to relate a unit to its neighbors. In 4\.2, a nearest neighbor weights matrix was used. Here, weights are calculated with ‘polygon adjacency’. The code block below creates a neighbor list, `final_net.nb`, with `poly2nb`, and a spatial weights matrix, `final_net.weights`, with `nb2listw`, using `queen` contiguity. This means that every grid cell is related to its eight adjacent neighbors (think about how a queen moves on a chess board).
Figure 5\.10 below visualizes one grid cell and its queen neighbors. Any one grid cell’s neighbors can be returned like so, `final_net.nb[[1457]]`.
```
final_net.nb <- poly2nb(as_Spatial(final_net), queen=TRUE)
final_net.weights <- nb2listw(final_net.nb, style="W", zero.policy=TRUE)
```
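The code for Figure 5\.10 is not reproduced here; a hedged sketch of how one grid cell and its queen neighbors might be mapped (using the `1457` index mentioned above) could look like this:
```
# a sketch, not the book's Figure 5.10 code
cell_of_interest <- 1457
neighbor_ids <- final_net.nb[[cell_of_interest]]

ggplot() +
  geom_sf(data = final_net, fill = "grey90", colour = "white") +
  geom_sf(data = final_net[neighbor_ids, ], fill = "#FDE725FF") +
  geom_sf(data = final_net[cell_of_interest, ], fill = "black") +
  labs(title = "One grid cell (black) and its queen neighbors (yellow)") +
  mapTheme()
```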
Figure 5\.11 below describes the *local* spatial process of burglary. `final_net.localMorans` is created by column binding `final_net` with the results of a `localmoran` test. The inputs to the `localmoran` test include `countBurglaries` and the spatial weights matrix. Several useful test statistics are output including *I*, the p\-value, and `Significant_Hotspots`, defined as those grid cells with higher local counts than what might otherwise be expected under randomness (p\-values \<\= 0\.05\). The data frame is then converted to long form for mapping.
Another `grid.arrange` loop is used to create a small multiple map of the aforementioned indicators. The legend in Figure 5\.11 below shows that relatively high values of *I* represent strong and statistically significant evidence of local clustering. Further evidence of this can be seen in the p\-value and significant hotspot maps. This test provides insight into the scale, location, and intensity of burglary hotspots.
```
final_net.localMorans <-
cbind(
as.data.frame(localmoran(final_net$countBurglaries, final_net.weights)),
as.data.frame(final_net)) %>%
st_sf() %>%
dplyr::select(Burglary_Count = countBurglaries,
Local_Morans_I = Ii,
P_Value = `Pr(z > 0)`) %>%
mutate(Significant_Hotspots = ifelse(P_Value <= 0.05, 1, 0)) %>%
gather(Variable, Value, -geometry)
vars <- unique(final_net.localMorans$Variable)
varList <- list()
for(i in vars){
varList[[i]] <-
ggplot() +
geom_sf(data = filter(final_net.localMorans, Variable == i),
aes(fill = Value), colour=NA) +
scale_fill_viridis(name="") +
labs(title=i) +
mapTheme() + theme(legend.position="bottom")}
do.call(grid.arrange,c(varList, ncol = 4, top = "Local Morans I statistics, Burglary"))
```
Why is this information useful? A generalizable model must predict equally well in the hotspots as in the coldspots. A model fit only to the coldspots will underfit the hotspots, and vice versa. Not only is this local insight useful exploratory analysis; it can also be engineered into powerful spatial features to control for the local spatial process.
What is the appropriate scale for this local spatial process? Figure 5\.12 explores local hotspots by varying the p\-value of the Local Moran’s I. The smaller the p\-value, the more significant the clusters. A threshold of `0.0000001` isolates only the strongest and most significant burglary hot spots.
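The code behind Figure 5\.12 is not shown; a quick sketch of the idea, counting significant hotspot cells at increasingly strict thresholds, might look like this:
```
# a sketch: how many grid cells qualify as hotspots at each p-value threshold?
p_values <- localmoran(final_net$countBurglaries, final_net.weights)[, 5]
sapply(c(0.05, 0.001, 0.0000001), function(p) sum(p_values <= p))
```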
The code block below creates a Local Moran’s *I* feature in `final_net`. As before, a dummy variable, `burglary.isSig`, denotes a cell as part of a significant cluster (a p\-value `<= 0.0000001`). `burglary.isSig.dist` then measures average nearest neighbor distance from each cell centroid to its nearest significant cluster. We can now model important information on the local spatial process of burglaries.
```
final_net <-
final_net %>%
mutate(burglary.isSig =
ifelse(localmoran(final_net$countBurglaries,
final_net.weights)[,5] <= 0.0000001, 1, 0)) %>%
mutate(burglary.isSig.dist =
nn_function(st_coordinates(st_centroid(final_net)),
st_coordinates(st_centroid(
filter(final_net, burglary.isSig == 1))), 1))
```
### 5\.4\.1 Correlation tests
Correlation gives important context while also providing intuition on features that may predict `countBurglaries`. The code block below creates a small multiple scatterplot of `countBurglaries` as a function of the risk factors. `correlation.long` converts `final_net` to long form. `correlation.cor` groups by `Variable`, and calculates the Pearson R correlation, shown directly on the plot.
Figure 5\.14 organizes count and nearest neighbor (`nn`) correlations side\-by\-side. While correlation for count features is a bit awkward, this approach can help with feature selection. For a given risk factor, avoid collinearity by selecting *either* the count or nearest neighbor feature. Just remember, when all features are entered into a multivariate regression, the correlations will change.
```
correlation.long <-
st_drop_geometry(final_net) %>%
dplyr::select(-uniqueID, -cvID, -loopDistance, -name, -District) %>%
gather(Variable, Value, -countBurglaries)
correlation.cor <-
correlation.long %>%
group_by(Variable) %>%
summarize(correlation = cor(Value, countBurglaries, use = "complete.obs"))
ggplot(correlation.long, aes(Value, countBurglaries)) +
geom_point(size = 0.1) +
geom_text(data = correlation.cor, aes(label = paste("r =", round(correlation, 2))),
x=-Inf, y=Inf, vjust = 1.5, hjust = -.1) +
geom_smooth(method = "lm", se = FALSE, colour = "black") +
facet_wrap(~Variable, ncol = 2, scales = "free") +
labs(title = "Burglary count as a function of risk factors") +
plotTheme()
```
5\.5 Poisson Regression
-----------------------
Take a look at the skewed distribution of `countBurglaries` in the topmost histogram of Figure 5\.15\. Given burglary is a relatively rare event, it is reasonable for most grid cells to contain no crime events. When data is distributed this way, an OLS regression is inappropriate. In this section, a Poisson Regression is estimated which is uniquely suited to modeling a count outcome like `countBurglaries`.
There are many different approaches to modeling burglary counts. Here, a Poisson Regression is used, which is based on a Poisson distribution, simulated in the bottommost histogram of Figure 5\.15\. Do the observed and simulated distributions appear similar? There are many flavors of count\-based regression, but the one used here is the simplest.[43](#fn43)
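For added intuition, the minimal sketch below (not from the text) simulates a Poisson distribution with the same mean as the observed counts, and fits a simple Poisson regression with `glm`; the two predictors are chosen purely for illustration.
```
# a sketch: simulate a Poisson distribution with the observed mean...
set.seed(1234)
poisson_sim <- rpois(nrow(final_net), lambda = mean(final_net$countBurglaries))
ggplot() +
  geom_histogram(aes(poisson_sim), binwidth = 1,
                 fill = "#FDE725FF", colour = "black") +
  labs(title = "Simulated Poisson distribution",
       x = "Simulated count", y = "Count of grid cells") +
  plotTheme()

# ...and fit a simple Poisson regression on two illustrative features
poisson_fit <- glm(countBurglaries ~ Abandoned_Buildings.nn + Liquor_Retail.nn,
                   family = "poisson", data = st_drop_geometry(final_net))
summary(poisson_fit)
```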
### 5\.5\.1 Cross\-validated Poisson Regression
Recall, generalizability is important 1\) to test model performance on new data and 2\) on different (spatial) group contexts, like neighborhoods. In this section, both are addressed.
Unlike home prices, `final_net` is not split into training and test sets. Instead, we move directly to cross\-validation, and because geospatial risk models are purely spatial, *spatial cross\-validation* becomes an important option.
A well generalized crime predictive model learns the crime risk ‘experience’ at both citywide and local spatial scales. The best way to test for this is to hold out one local area, train the model on the remaining *n \- 1* areas, predict for the hold out, and record the goodness of fit. In this form of spatial cross\-validation called ‘Leave\-one\-group\-out’ cross\-validation (LOGO\-CV), each neighborhood takes a turn as a hold\-out.
Imagine one neighborhood has a particularly unique local experience. LOGO\-CV assumes that the experience in other neighborhoods generalizes to this unique place \- which is a pretty rigid assumption.
Three `final_net` fields can be used for cross\-validation. A randomly generated `cvID` associated with each grid cell can be used for random k\-fold cross\-validation. Neighborhood `name` and Police `District` can be used for spatial cross\-validation.
Below, goodness of fit metrics are generated for four regressions \- two using `Just Risk Factors` (`reg.vars`), and two using those risk factors plus the Local Moran’s I `Spatial Process` features created in 5\.4\.1 (`reg.ss.vars`). These features are relatively simple \- can you improve on them?
```
reg.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance")
reg.ss.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance", "burglary.isSig", "burglary.isSig.dist")
```
The `crossValidate` function below is very simple and designed to take an input `dataset`; a cross\-validation `id`; a `dependentVariable`; and a list of independent variables, `indVariables`. `cvID_list` is a list of unique `id`s which could be numbers or neighborhood `name`, for instance.
For a given neighborhood `name`, the function assigns each grid cell *not* in that neighborhood to the training set, `fold.train`, and each cell *in* that neighborhood to the test set, `fold.test`. A model is trained from the former and used to predict on the latter. The process is repeated until each neighborhood has a turn acting as a hold out. **For now**, the `countBurglaries` variable is hardwired into the function, which you will **have to change** when using this function for future analyses. If you have read in the above `functions.r`, you can see the function by running `crossValidate`.
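For intuition only, here is a rough sketch of the hold\-one\-group\-out logic just described. It assumes a Poisson `glm` and is **not** the `crossValidate` function from `functions.r` \- run `crossValidate` to inspect that one.
```
# a sketch of the LOGO-CV logic; see `crossValidate` for the real version
logo_cv_sketch <- function(dataset, id, dependentVariable, indVariables) {
  folds <- lapply(unique(dataset[[id]]), function(holdout) {
    fold.train <- filter(dataset, .data[[id]] != holdout)  # all other groups
    fold.test  <- filter(dataset, .data[[id]] == holdout)  # the hold-out group
    fit <- glm(reformulate(indVariables, response = dependentVariable),
               family = "poisson", data = st_drop_geometry(fold.train))
    mutate(fold.test, Prediction = predict(fit, fold.test, type = "response"))
  })
  do.call(rbind, folds)
}
```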
The code block below runs `crossValidate` to estimate four different regressions.
* `reg.cv` and `reg.ss.cv` perform random *k\-fold* cross validation using `Just Risk Factors` and the `Spatial Process` features, respectively.
* `reg.spatialCV` and `reg.ss.spatialCV` perform *LOGO\-CV*, spatial cross\-validation on neighborhood `name`, using the aforementioned two sets of features.
The function makes it easy to swap out different cross\-validation group `id`’s. k\-fold cross validation uses the `cvID`. LOGO\-CV uses the neighborhood `name`. You may also wish to explore results using the Police `District`. Note that in the `select` operation at the end of the LOGO\-CV models, the cross\-validation `id` is standardized to `cvID`.
The result of each analysis is a `sf` layer with observed and predicted burglary counts.
```
reg.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.ss.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
reg.ss.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
```
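As noted above, swapping in the Police `District` id is a one\-line change. A hypothetical additional run (not estimated in the text) would look like:
```
# a hypothetical LOGO-CV run on Police Districts; not used below
reg.ss.districtCV <- crossValidate(
  dataset = final_net,
  id = "District",
  dependentVariable = "countBurglaries",
  indVariables = reg.ss.vars) %>%
  dplyr::select(cvID = District, countBurglaries, Prediction, geometry)
```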
### 5\.5\.2 Accuracy \& Generalizability
A host of goodness of fit metrics are calculated below with particular emphasis on generalizability across space. The code block below creates a long form `reg.summary` that binds together observed/predicted counts and errors for each grid cell and for each `Regression`, along with the `cvID` and the `geometry`.
```
reg.summary <-
rbind(
mutate(reg.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Just Risk Factors"),
mutate(reg.ss.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Spatial Process"),
mutate(reg.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Just Risk Factors"),
mutate(reg.ss.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Spatial Process")) %>%
st_sf()
```
In the code block below, `error_by_reg_and_fold` calculates and visualizes MAE for *each* fold across each regression. The `Spatial Process` features seem to reduce errors overall.
Recall, LOGO\-CV assumes the local spatial process from all *other* neighborhoods generalizes to the hold\-out. When the local spatial process is not accounted for (i.e., `Just Risk Factors`), some neighborhood hold\-outs have MAEs greater than 4 burglaries. However, those large errors disappear when the `Spatial Process` features are added. The lesson is that there is a shared local burglary experience across Chicago, and accounting for it improves the model, particularly in the hotspots.
What more can you learn by plotting raw errors in this histogram format?
```
error_by_reg_and_fold <-
reg.summary %>%
group_by(Regression, cvID) %>%
summarize(Mean_Error = mean(Prediction - countBurglaries, na.rm = T),
MAE = mean(abs(Mean_Error), na.rm = T),
SD_MAE = mean(abs(Mean_Error), na.rm = T)) %>%
ungroup()
error_by_reg_and_fold %>%
ggplot(aes(MAE)) +
geom_histogram(bins = 30, colour="black", fill = "#FDE725FF") +
facet_wrap(~Regression) +
geom_vline(xintercept = 0) + scale_x_continuous(breaks = seq(0, 8, by = 1)) +
labs(title="Distribution of MAE", subtitle = "k-fold cross validation vs. LOGO-CV",
x="Mean Absolute Error", y="Count") +
plotTheme()
```
The table below builds on `error_by_reg_and_fold` to calculate the mean and standard deviation in errors by regression (note the additional `group_by`). The result confirms our conclusion that the `Spatial Process` features improve the model. The model appears slightly less robust for the spatial cross\-validation because LOGO\-CV is such a conservative assumption. For intuition on how severe these errors are, compare them to the observed mean `countBurglaries`.
```
st_drop_geometry(error_by_reg_and_fold) %>%
group_by(Regression) %>%
summarize(Mean_MAE = round(mean(MAE), 2),
SD_MAE = round(sd(MAE), 2)) %>%
kable() %>%
kable_styling("striped", full_width = F) %>%
row_spec(2, color = "black", background = "#FDE725FF") %>%
row_spec(4, color = "black", background = "#FDE725FF")
```
| Regression | Mean\_MAE | SD\_MAE |
| --- | --- | --- |
| Random k\-fold CV: Just Risk Factors | 0\.49 | 0\.35 |
| Random k\-fold CV: Spatial Process | 0\.42 | 0\.29 |
| Spatial LOGO\-CV: Just Risk Factors | 0\.98 | 1\.26 |
| Spatial LOGO\-CV: Spatial Process | 0\.62 | 0\.61 |
Table 5\.1: MAE by regression
Figure 5\.17 visualizes the LOGO\-CV errors spatially. Note the use of `str_detect` in the `filter` operation to pull out just the LOGO\-CV regression errors. These maps visualize where the higher errors occur when the local spatial process is not accounted for. Not surprisingly, the largest errors are in the hotspot locations.
```
error_by_reg_and_fold %>%
filter(str_detect(Regression, "LOGO")) %>%
ggplot() +
geom_sf(aes(fill = MAE)) +
facet_wrap(~Regression) +
scale_fill_viridis() +
labs(title = "Burglary errors by LOGO-CV Regression") +
mapTheme() + theme(legend.position="bottom")
```
As discussed in Chapter 4, accounting for the local spatial process should remove all spatial variation in `countBurglaries`, which should leave little spatial autocorrelation in model errors. To test this, the code block below calculates a new spatial weights matrix, `neighborhood.weights`, at the neighborhood instead of grid cell scale. Global Moran’s *I* and p\-values are then calculated for each LOGO\-CV regression.
This provides more evidence that the `Spatial Process` features helped account for the spatial variation in burglary, although some still remains. More risk *and* protective factor features would be the next step to improve this, followed perhaps by engineering improved spatial process features.
```
neighborhood.weights <-
filter(error_by_reg_and_fold, Regression == "Spatial LOGO-CV: Spatial Process") %>%
group_by(cvID) %>%
poly2nb(as_Spatial(.), queen=TRUE) %>%
nb2listw(., style="W", zero.policy=TRUE)
filter(error_by_reg_and_fold, str_detect(Regression, "LOGO")) %>%
st_drop_geometry() %>%
group_by(Regression) %>%
summarize(Morans_I = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[1]],
p_value = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[3]])
```
| Regression | Morans\_I | p\_value |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | 0\.2505835 | 0\.001 |
| Spatial LOGO\-CV: Spatial Process | 0\.1501559 | 0\.013 |
Table 5\.2: Moran’s I on Errors by Regression
On to model predictions. Figure 5\.18 below maps predictions for the LOGO\-CV regressions. The `Spatial Process` features do a better job picking up the hotspots, as intended. Given the rigid assumptions of LOGO\-CV, it is impressive that other local hotspots can generally predict hotspots in hold\-out neighborhoods.
The spatial process features produce a ‘smoother’ crime risk surface, relative to the observed counts. These predictions represent ‘latent crime risk’ \- areas at risk even if a crime hasn’t actually been observed. Accuracy is not as important as generalizability; nevertheless, of 6817 observed burglaries, `Spatial LOGO-CV: Spatial Process` predicted 6804 burglaries citywide.
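The citywide totals cited above can be checked with a quick sketch like this:
```
# a sketch: compare observed and predicted citywide burglary totals
st_drop_geometry(reg.summary) %>%
  filter(Regression == "Spatial LOGO-CV: Spatial Process") %>%
  summarize(Observed  = sum(countBurglaries),
            Predicted = round(sum(Prediction)))
```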
Interestingly, Figure 5\.19 below shows that all models over\-predict in low burglary areas and under\-predict in hot spot areas. Over\-predictions in lower burglary areas may highlight areas of latent risk. Under\-prediction in higher burglary areas may reflect difficulty predicting the hotspots.
Let’s now test for generalizability across racial\-neighborhood context, as we did in Chapter 4\.
```
st_drop_geometry(reg.summary) %>%
group_by(Regression) %>%
mutate(burglary_Decile = ntile(countBurglaries, 10)) %>%
group_by(Regression, burglary_Decile) %>%
summarize(meanObserved = mean(countBurglaries, na.rm=T),
meanPrediction = mean(Prediction, na.rm=T)) %>%
gather(Variable, Value, -Regression, -burglary_Decile) %>%
ggplot(aes(burglary_Decile, Value, shape = Variable)) +
geom_point(size = 2) + geom_path(aes(group = burglary_Decile), colour = "black") +
scale_shape_manual(values = c(2, 17)) +
facet_wrap(~Regression) + xlim(0,10) +
labs(title = "Predicted and observed burglary by observed burglary decile") +
plotTheme()
```
### 5\.5\.3 Generalizability by neighborhood context
Does the algorithm generalize across different neighborhood contexts? To test this proposition, `tidycensus` is used to pull race data by Census tract. `percentWhite` is calculated and tracts are split into two groups, `Majority_White` and `Majority_Non_White`. A spatial subset is used to get tracts within the study area.
Like Boston, Chicago is a very segregated city, as the map below shows.
```
tracts18 <-
get_acs(geography = "tract", variables = c("B01001_001E","B01001A_001E"),
year = 2018, state=17, county=031, geometry=T) %>%
st_transform('ESRI:102271') %>%
dplyr::select(variable, estimate, GEOID) %>%
spread(variable, estimate) %>%
rename(TotalPop = B01001_001,
NumberWhites = B01001A_001) %>%
mutate(percentWhite = NumberWhites / TotalPop,
raceContext = ifelse(percentWhite > .5, "Majority_White", "Majority_Non_White")) %>%
.[neighborhoods,]
```
As in Chapter 4, `Error` is calculated by subtracting the observed burglary count from the prediction. Thus, a positive difference represents an over\-prediction. The least ideal result is a model that over\-predicts risk in Minority areas and under\-predicts in White areas. If reporting selection bias is an issue, such a model *may* unfairly allocate police resources disproportionately in Black and Brown communities. The table below compares average (non\-absolute) errors for the LOGO\-CV regressions by `raceContext`, by joining the fishnet grid cell centroids to tract boundaries.
The model on average, under\-predicts in `Majority_Non_White` neighborhoods and over\-predicts in `Majority_White` neighborhoods. The `Spatial Process` model not only reports lower errors overall, but a smaller difference in errors across neighborhood context.
It looks like this algorithm generalizes well with respect to race, right? We will return to this question in the conclusion. In the last stage of the analysis, the utility of this algorithm is judged relative to an alternative police allocation method.
```
reg.summary %>%
filter(str_detect(Regression, "LOGO")) %>%
st_centroid() %>%
st_join(tracts18) %>%
na.omit() %>%
st_drop_geometry() %>%
group_by(Regression, raceContext) %>%
summarize(mean.Error = mean(Error, na.rm = T)) %>%
spread(raceContext, mean.Error) %>%
kable(caption = "Mean Error by neighborhood racial context") %>%
kable_styling("striped", full_width = F)
```
| Regression | Majority\_Non\_White | Majority\_White |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | \-0\.1164018 | 0\.1211148 |
| Spatial LOGO\-CV: Spatial Process | \-0\.0525908 | 0\.0421751 |
Table 5\.3: Mean Error by neighborhood racial context
### 5\.5\.4 Does this model allocate better than traditional crime hotspots?
Police departments all over the world use hotspot policing to target police resources to the places where crime is most concentrated. In this section, we ask whether risk predictions outperform traditional ‘Kernel density’ hotspot mapping[44](#fn44). To add an element of *across\-time* generalizability, hotspot and risk predictions from these 2017 burglaries are used to predict the location of burglaries from *2018*.
Kernel density works by centering a smooth kernel, or curve, atop each crime point such that the curve is at its highest directly over the point and at its lowest at the edge of a circular search radius. The density in a particular place is the sum of all the kernels that underlie it. Thus, areas with many nearby points have relatively high densities. The key scale assumption in kernel density is the use of a global search radius parameter. Because of its reliance on nearby points, think of Kernel density as making ‘predictions’ based purely on spatial autocorrelation.
Figure 5\.20 visualizes three Kernel density maps at three different scales. Note the different burglary hotspot ‘narratives’ depending on the radius used.
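The code for Figure 5\.20 is not included in the text; a hedged sketch might look like the following, where the three search radii are assumptions chosen for illustration:
```
# a sketch: kernel density surfaces at three assumed search radii
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
par(mfrow = c(1, 3))
for (radius in c(1000, 1500, 2000)) {
  plot(spatstat::density.ppp(burg_ppp, radius),
       main = paste0(radius, "ft search radius"))
}
```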
The code block below creates a Kernel density map with a `1000` foot search radius using the `spatstat` package. `as.ppp` converts burglary coordinates to a `ppp` class. The `density` function creates the Kernel density. To map it, the density surface is converted to a data frame and then to an `sf` layer. Points are spatially joined to the `final_net` and the mean density is taken. Here density is visualized with a `sample_n` of 1,500 points overlaid on top.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
ggplot() +
geom_sf(aes(fill=value)) +
geom_sf(data = sample_n(burglaries, 1500), size = .5) +
scale_fill_viridis(name = "Density") +
labs(title = "Kernel density of 2017 burglaries") +
mapTheme()
```
Next, a new goodness of fit indicator is created to illustrate whether the *2017* kernel density or risk predictions capture more of the *2018* burglaries. If the risk predictions capture more observed burglaries than the kernel density, then the risk prediction model provides a more robust targeting tool for allocating police resources. Here are the steps.
1. Download `burglaries18` from the Chicago Open Data site.
2. Compute the Kernel Density on 2017 `burglaries`.
3. Scale the Kernel density values to run from 1\-100 and then reclassify those values into 5 risk categories.
4. Spatial join the density to the fishnet.
5. Join to the fishnet, the count of burglaries *in 2018* for each grid cell.
6. Repeat for the risk predictions.
7. Take the rate of 2018 points by model type and risk category. Map and plot accordingly.
Step 1 downloads `burglaries18`.
```
burglaries18 <-
read.socrata("https://data.cityofchicago.org/Public-Safety/Crimes-2018/3i3m-jwuy") %>%
filter(Primary.Type == "BURGLARY" &
Description == "FORCIBLE ENTRY") %>%
mutate(x = gsub("[()]", "", Location)) %>%
separate(x,into= c("Y","X"), sep=",") %>%
mutate(X = as.numeric(X),
Y = as.numeric(Y)) %>%
na.omit %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform('ESRI:102271') %>%
distinct() %>%
.[fishnet,]
```
Next, Kernel density is computed on the 2017 burglaries. `burg_KDE_sf` converts the density to an sf layer; spatial joins (`aggregate`) it to the fishnet; converts the density to percentiles (`ntile(value, 100)`); and then to 5 risk categories. Finally, one last spatial join adds the count of observed burglaries in 2018\.
Check out `head(burg_KDE_sf)` to see the result.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
burg_KDE_sf <- as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
mutate(label = "Kernel Density",
Risk_Category = ntile(value, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label, Risk_Category, burgCount)
```
The same process is repeated for risk predictions. Note the prediction from the LOGO\-CV with the spatial features is being used here.
```
burg_risk_sf <-
filter(reg.summary, Regression == "Spatial LOGO-CV: Spatial Process") %>%
mutate(label = "Risk Predictions",
Risk_Category = ntile(Prediction, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label,Risk_Category, burgCount)
```
For each grid cell and model type (density vs. risk prediction), there is now an associated risk category and 2018 burglary count. Below, a map is generated of the risk categories for both model types, with a sample of `burglaries18` points overlaid. A strongly fit model should show that the highest risk category is uniquely targeted to places with a high density of burglary points.
Is this what we see? High risk categories with few 2018 observed burglaries may suggest latent risk or a poorly fit model. This ambiguity is why accuracy for geospatial risk models is tough to judge. Nevertheless, more features/feature engineering would be helpful.
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
na.omit() %>%
gather(Variable, Value, -label, -Risk_Category, -geometry) %>%
ggplot() +
geom_sf(aes(fill = Risk_Category), colour = NA) +
geom_sf(data = sample_n(burglaries18, 3000), size = .5, colour = "black") +
facet_wrap(~label, ) +
scale_fill_viridis(discrete = TRUE) +
labs(title="Comparison of Kernel Density and Risk Predictions",
subtitle="2017 burglar risk predictions; 2018 burglaries") +
mapTheme()
```
Finally, the code block below calculates the rate of 2018 burglary points by risk category and model type. A well fit model should show that the risk predictions capture a greater share of 2018 burglaries *in the highest risk category* relative to the Kernel density.
The risk prediction model narrowly edges out the Kernel Density in the top two highest risk categories \- suggesting this simple model has some value relative to the business\-as\-usual hot spot approach. Thus, we may have developed a tool to help target police resources, but is it a useful planning tool?
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
st_set_geometry(NULL) %>% na.omit() %>%
gather(Variable, Value, -label, -Risk_Category) %>%
group_by(label, Risk_Category) %>%
summarize(countBurglaries = sum(Value)) %>%
ungroup() %>%
group_by(label) %>%
mutate(Rate_of_test_set_crimes = countBurglaries / sum(countBurglaries)) %>%
ggplot(aes(Risk_Category,Rate_of_test_set_crimes)) +
geom_bar(aes(fill=label), position="dodge", stat="identity") +
scale_fill_viridis(discrete = TRUE) +
labs(title = "Risk prediction vs. Kernel density, 2018 burglaries") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, vjust = 0.5))
```
### 5\.5\.1 Cross\-validated Poisson Regression
Recall, generalizability is important for testing 1\) model performance on new data and 2\) performance across different (spatial) group contexts, like neighborhoods. In this section, both are addressed.
Unlike home prices, `final_net` is not split into training and test sets. Instead, we move directly to cross\-validation, and because geospatial risk models are purely spatial, *spatial cross\-validation* becomes an important option.
A well generalized crime predictive model learns the crime risk ‘experience’ at both citywide and local spatial scales. The best way to test for this is to hold out one local area, train the model on the remaining *n \- 1* areas, predict for the hold out, and record the goodness of fit. In this form of spatial cross\-validation called ‘Leave\-one\-group\-out’ cross\-validation (LOGO\-CV), each neighborhood takes a turn as a hold\-out.
Imagine one neighborhood has a particularly unique local experience. LOGO\-CV assumes that the experience in other neighborhoods generalizes to this unique place \- which is a pretty rigid assumption.
Three `final_net` fields can be used for cross\-validation. A randomly generated `cvID` associated with each grid cell can be used for random k\-fold cross\-validation. Neighborhood `name` and Police `District` can be used for spatial cross\-validation.
Below, goodness of fit metrics are generated for four regressions \- two include `Just Risk Factors` (`reg.vars`), and two add the Local Moran’s I `Spatial Process` features created in 5\.4\.1 (`reg.ss.vars`). These features are relatively simple \- can you improve on them?
```
reg.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance")
reg.ss.vars <- c("Abandoned_Buildings.nn", "Abandoned_Cars.nn", "Graffiti.nn",
"Liquor_Retail.nn", "Street_Lights_Out.nn", "Sanitation.nn",
"loopDistance", "burglary.isSig", "burglary.isSig.dist")
```
The `crossValidate` function below is very simple and designed to take an input `dataset`; a cross\-validation `id`; a `dependentVariable`; and a list of independent variables, `indVariables`. `cvID_list` is a list of unique `id`s which could be numbers or neighborhood `name`, for instance.
For a given neighborhood `name`, the function assigns each grid cell *not* in that neighborhood to the training set, `fold.train`, and each cell *in* that neighborhood to the test set, `fold.test`. A model is trained from the former and used to predict on the latter. The process is repeated until each neighborhood has a turn acting as a hold out. **For now**, the `countBurglaries` variable is hardwired into the function, which you will **have to change** when using this function for future analyses. If you have read in the above `functions.r`, you can see the function by running `crossValidate`.
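The actual `crossValidate` function lives in `functions.r`; as a rough, hypothetical sketch of its logic (assuming a Poisson `glm` and the hard\-wired `countBurglaries` outcome), it might look something like this:
```
# Illustrative sketch only; the published crossValidate() may differ in details.
# Assumes a Poisson glm and hard-codes countBurglaries as the outcome.
crossValidate_sketch <- function(dataset, id, dependentVariable, indVariables) {
  allPredictions <- NULL
  for (thisFold in unique(dataset[[id]])) {
    fold.train <- dataset[dataset[[id]] != thisFold, ]  # train on all other groups
    fold.test  <- dataset[dataset[[id]] == thisFold, ]  # predict for the hold-out group
    regression <-
      glm(countBurglaries ~ .,
          family = "poisson",
          data = fold.train %>%
            st_drop_geometry() %>%
            dplyr::select(dplyr::all_of(c(dependentVariable, indVariables))))
    fold.test$Prediction <- predict(regression, fold.test, type = "response")
    allPredictions <- rbind(allPredictions, fold.test)
  }
  st_sf(allPredictions)
}
```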
The code block below runs `crossValidate` to estimate four different regressions.
* `reg.cv` and `reg.ss.cv` perform random *k\-fold* cross validation using `Just Risk Factors` and the `Spatial Process` features, respectively.
* `reg.spatialCV` and `reg.ss.spatialCV` perform *LOGO\-CV*, spatial cross\-validation on neighborhood `name`, using the aforementioned two sets of features.
The function makes it easy to swap out different cross\-validation group `id`’s. k\-fold cross validation uses the `cvID`. LOGO\-CV uses the neighborhood `name`. You may also wish to explore results using the Police `districts`. Note, in the `select` operation at the end of the LOGO\-CV models, the cross validation `id` is standardized to `cvID`.
The result of each analysis is a `sf` layer with observed and predicted burglary counts.
```
reg.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.ss.cv <- crossValidate(
dataset = final_net,
id = "cvID",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = cvID, countBurglaries, Prediction, geometry)
reg.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
reg.ss.spatialCV <- crossValidate(
dataset = final_net,
id = "name",
dependentVariable = "countBurglaries",
indVariables = reg.ss.vars) %>%
dplyr::select(cvID = name, countBurglaries, Prediction, geometry)
```
### 5\.5\.2 Accuracy \& Generalizability
A host of goodness of fit metrics are calculated below with particular emphasis on generalizability across space. The code block below creates a long form `reg.summary` that binds together observed/predicted counts and errors for each grid cell and for each `Regression`, along with the `cvID` and the `geometry`.
```
reg.summary <-
rbind(
mutate(reg.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Just Risk Factors"),
mutate(reg.ss.cv, Error = Prediction - countBurglaries,
Regression = "Random k-fold CV: Spatial Process"),
mutate(reg.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Just Risk Factors"),
mutate(reg.ss.spatialCV, Error = Prediction - countBurglaries,
Regression = "Spatial LOGO-CV: Spatial Process")) %>%
st_sf()
```
In the code block below, `error_by_reg_and_fold` calculates and visualizes MAE for *each* fold across each regression. The `Spatial Process` features seem to reduce errors overall.
Recall, LOGO\-CV assumes the local spatial process from all *other* neighborhoods generalizes to the hold\-out. When the local spatial process is not accounted for (ie. `Just Risk Factors`), some neighborhood hold\-outs have MAEs greater than 4 burglaries. However, those large errors disappear when the `Spatial Process` features are added. The lesson is that there is a shared local burglary experience across Chicago, and accounting for it improves the model, particularly in the hotspots.
What more can you learn by plotting raw errors in this histogram format?
```
error_by_reg_and_fold <-
reg.summary %>%
group_by(Regression, cvID) %>%
summarize(Mean_Error = mean(Prediction - countBurglaries, na.rm = T),
MAE = mean(abs(Mean_Error), na.rm = T),
SD_MAE = mean(abs(Mean_Error), na.rm = T)) %>%
ungroup()
error_by_reg_and_fold %>%
ggplot(aes(MAE)) +
geom_histogram(bins = 30, colour="black", fill = "#FDE725FF") +
facet_wrap(~Regression) +
geom_vline(xintercept = 0) + scale_x_continuous(breaks = seq(0, 8, by = 1)) +
labs(title="Distribution of MAE", subtitle = "k-fold cross validation vs. LOGO-CV",
x="Mean Absolute Error", y="Count") +
plotTheme()
```
The table below builds on `error_by_reg_and_fold` to calculate the mean and standard deviation in errors by regression (note the additional `group_by`). The result confirms our conclusion that the `Spatial Process` features improve the model. The model appears slightly less robust for the spatial cross\-validation because LOGO\-CV is such a conservative assumption. For intuition on how severe these errors are, compare them to the observed mean `countBurglaries`.
```
st_drop_geometry(error_by_reg_and_fold) %>%
group_by(Regression) %>%
summarize(Mean_MAE = round(mean(MAE), 2),
SD_MAE = round(sd(MAE), 2)) %>%
kable() %>%
kable_styling("striped", full_width = F) %>%
row_spec(2, color = "black", background = "#FDE725FF") %>%
row_spec(4, color = "black", background = "#FDE725FF")
```
| Regression | Mean\_MAE | SD\_MAE |
| --- | --- | --- |
| Random k\-fold CV: Just Risk Factors | 0\.49 | 0\.35 |
| Random k\-fold CV: Spatial Process | 0\.42 | 0\.29 |
| Spatial LOGO\-CV: Just Risk Factors | 0\.98 | 1\.26 |
| Spatial LOGO\-CV: Spatial Process | 0\.62 | 0\.61 |
| MAE by regression |
| --- |
| Table 5\.1 |
Figure 5\.17 visualizes the LOGO\-CV errors spatially. Note the use of `str_detect` in the `filter` operation to pull out just the LOGO\-CV regression errors. These maps visualize where the higher errors occur when the local spatial process is not accounted for. Not surprisingly, the largest errors are in the hotspot locations.
```
error_by_reg_and_fold %>%
filter(str_detect(Regression, "LOGO")) %>%
ggplot() +
geom_sf(aes(fill = MAE)) +
facet_wrap(~Regression) +
scale_fill_viridis() +
labs(title = "Burglary errors by LOGO-CV Regression") +
mapTheme() + theme(legend.position="bottom")
```
As discussed in Chapter 4, accounting for the local spatial process should remove all spatial variation in `countBurglaries`, which should leave little spatial autocorrelation in model errors. To test this, the code block below calculates a new `neighborhood.weights`, a spatial weights matrix at the neighborhood instead of grid cell scale. Global Moran’s *I* and p\-values are then calculated for each LOGO\-CV regression.
This provides more evidence that the `Spatial Process` features helped account for the spatial variation in burglary, although some still remains. More risk *and* protective factor features would be the next step to improve this, followed perhaps by engineering improved spatial process features.
```
neighborhood.weights <-
filter(error_by_reg_and_fold, Regression == "Spatial LOGO-CV: Spatial Process") %>%
group_by(cvID) %>%
poly2nb(as_Spatial(.), queen=TRUE) %>%
nb2listw(., style="W", zero.policy=TRUE)
filter(error_by_reg_and_fold, str_detect(Regression, "LOGO")) %>%
st_drop_geometry() %>%
group_by(Regression) %>%
summarize(Morans_I = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[1]],
p_value = moran.mc(abs(Mean_Error), neighborhood.weights,
nsim = 999, zero.policy = TRUE,
na.action=na.omit)[[3]])
```
| Regression | Morans\_I | p\_value |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | 0\.2505835 | 0\.001 |
| Spatial LOGO\-CV: Spatial Process | 0\.1501559 | 0\.013 |
| Moran’s I on Errors by Regression |
| --- |
| Table 5\.2 |
On to model predictions. Figure 5\.18 below maps predictions for the LOGO\-CV regressions. The `Spatial Process` features do a better job picking up the hotspots, as intended. Given the rigid assumptions of LOGO\-CV, it is impressive that other local hotspots can generally predict hotspots in hold\-out neighborhoods.
The spatial process features produce a ‘smoother’ crime risk surface, relative to the observed counts. These predictions represent ‘latent crime risk’ \- areas at risk even if a crime hasn’t actually been observed. Accuracy is not as important as generalizability; nevertheless, of 6817 observed burglaries, `Spatial LOGO-CV: Spatial Process` predicted 6804 burglaries citywide.
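That citywide comparison can be sanity\-checked by summing observed and predicted counts directly; a quick, illustrative query:
```
# Compare citywide observed vs. predicted totals for one regression
reg.summary %>%
  st_drop_geometry() %>%
  filter(Regression == "Spatial LOGO-CV: Spatial Process") %>%
  summarize(Observed_Burglaries  = sum(countBurglaries, na.rm = TRUE),
            Predicted_Burglaries = sum(Prediction, na.rm = TRUE))
```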
Interestingly, Figure 5\.19 below shows that all models over\-predict in low burglary areas and under\-predict in hot spot areas. Over\-predictions in lower burglary areas may highlight areas of latent risk. Under\-prediction in higher burglary areas may reflect difficulty predicting the hotspots.
Let’s now test for generalizability across racial\-neighborhood context, as we did in Chapter 4\.
```
st_drop_geometry(reg.summary) %>%
group_by(Regression) %>%
mutate(burglary_Decile = ntile(countBurglaries, 10)) %>%
group_by(Regression, burglary_Decile) %>%
summarize(meanObserved = mean(countBurglaries, na.rm=T),
meanPrediction = mean(Prediction, na.rm=T)) %>%
gather(Variable, Value, -Regression, -burglary_Decile) %>%
ggplot(aes(burglary_Decile, Value, shape = Variable)) +
geom_point(size = 2) + geom_path(aes(group = burglary_Decile), colour = "black") +
scale_shape_manual(values = c(2, 17)) +
facet_wrap(~Regression) + xlim(0,10) +
labs(title = "Predicted and observed burglary by observed burglary decile") +
plotTheme()
```
### 5\.5\.3 Generalizability by neighborhood context
Does the algorithm generalize across different neighborhood contexts? To test this proposition, `tidycensus` is used to pull race data by Census tract. `percentWhite` is calculated and tracts are split into two groups, `Majority_White` and `Majority_Non_White`. A spatial subset is used to get tracts within the study area.
Like Boston, Chicago is a very segregated city, as the map below shows.
```
tracts18 <-
get_acs(geography = "tract", variables = c("B01001_001E","B01001A_001E"),
year = 2018, state=17, county=031, geometry=T) %>%
st_transform('ESRI:102271') %>%
dplyr::select(variable, estimate, GEOID) %>%
spread(variable, estimate) %>%
rename(TotalPop = B01001_001,
NumberWhites = B01001A_001) %>%
mutate(percentWhite = NumberWhites / TotalPop,
raceContext = ifelse(percentWhite > .5, "Majority_White", "Majority_Non_White")) %>%
.[neighborhoods,]
```
As in Chapter 4, `Error` is calculated by subtracting the observed burglary count from the prediction. Thus, a positive difference represents an over\-prediction. The least ideal result is a model that over\-predicts risk in Minority areas, and under\-predicts in White areas. If reporting selection bias is an issue, such a model *may* unfairly allocate police resources disproportionately in Black and Brown communities. The table below compares average (non\-absolute) errors for the LOGO\-CV regressions by `raceContext`, by joining the fishnet grid cell centroids to tract boundaries.
The model on average, under\-predicts in `Majority_Non_White` neighborhoods and over\-predicts in `Majority_White` neighborhoods. The `Spatial Process` model not only reports lower errors overall, but a smaller difference in errors across neighborhood context.
It looks like this algorithm generalizes well with respect to race, right? We will return to this question in the conclusion. In the last stage of the analysis, the utility of this algorithm is judged relative to an alternative police allocation method.
```
reg.summary %>%
filter(str_detect(Regression, "LOGO")) %>%
st_centroid() %>%
st_join(tracts18) %>%
na.omit() %>%
st_drop_geometry() %>%
group_by(Regression, raceContext) %>%
summarize(mean.Error = mean(Error, na.rm = T)) %>%
spread(raceContext, mean.Error) %>%
kable(caption = "Mean Error by neighborhood racial context") %>%
kable_styling("striped", full_width = F)
```
| Regression | Majority\_Non\_White | Majority\_White |
| --- | --- | --- |
| Spatial LOGO\-CV: Just Risk Factors | \-0\.1164018 | 0\.1211148 |
| Spatial LOGO\-CV: Spatial Process | \-0\.0525908 | 0\.0421751 |
| Mean Error by neighborhood racial context |
| --- |
| Table 5\.3 |
### 5\.5\.4 Does this model allocate better than traditional crime hotspots?
Police departments all over the world use hotspot policing to target police resources to the places where crime is most concentrated. In this section, we ask whether risk predictions outperform traditional ‘Kernel density’ hotspot mapping[44](#fn44). To add an element of *across\-time* generalizability, hotspot and risk predictions from these 2017 burglaries are used to predict the location of burglaries from *2018*.
Kernel density works by centering a smooth kernel, or curve, atop each crime point such that the curve is at its highest directly over the point and lowest at the edge of a circular search radius. The density in a particular place is the sum of all the kernels that underlie it. Thus, areas with many nearby points have relatively high densities. The key scale assumption in kernel density is the use of a global search radius parameter. Because of its reliance on nearby points, think of Kernel density as making ‘predictions’ based purely on spatial autocorrelation.
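In general form (ignoring edge corrections), the estimated density at a location is the sum of a kernel function centered on each crime point, scaled by the search radius, or bandwidth, h:

$$
\hat{\lambda}(s) = \frac{1}{h^2} \sum_{i=1}^{n} K\left(\frac{\lVert s - s_i \rVert}{h}\right)
$$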
Figure 5\.20 visualizes three Kernel density maps at three different scales. Note the different burglary hotspot ‘narratives’ depending on the radius used.
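The code for Figure 5\.20 is not reproduced here, but a sketch like the one below could generate such a multi\-scale comparison. It assumes the same `burg_ppp` point pattern created in the next code block, and the three search radii are purely illustrative.
```
# Illustrative sketch of a multi-scale kernel density comparison
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
purrr::map_dfr(c(1000, 1500, 2000), function(radius) {
  as.data.frame(spatstat::density.ppp(burg_ppp, radius)) %>%
    mutate(Search_Radius = paste(radius, "ft."))
  }) %>%
  ggplot(aes(x = x, y = y, fill = value)) +
    geom_raster() +
    facet_wrap(~Search_Radius) +
    scale_fill_viridis(name = "Density") +
    labs(title = "Kernel density at three search radii") +
    mapTheme()
```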
The code block below creates a Kernel density map with a `1000` foot search radius using the `spatstat` package. `as.ppp` converts burglary coordinates to a `ppp` class. The `density` function creates the Kernel density. To map, the `ppp` is converted to a data frame and then an `sf` layer. Points are spatially joined to the `final_net` and the mean density is taken. Here density is visualized with a `sample_n` of 1,500 points overlaid on top.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
ggplot() +
geom_sf(aes(fill=value)) +
geom_sf(data = sample_n(burglaries, 1500), size = .5) +
scale_fill_viridis(name = "Density") +
labs(title = "Kernel density of 2017 burglaries") +
mapTheme()
```
Next, a new goodness of fit indicator is created to illustrate whether the *2017* kernel density or risk predictions capture more of the *2018* burglaries. If the risk predictions capture more observed burglaries than the kernel density, then the risk prediction model provides a more robust targeting tool for allocating police resources. Here are the steps.
1. Download `burglaries18` from the Chicago Open Data site.
2. Compute the Kernel Density on 2017 `burglaries`.
3. Scale the Kernel density values to run from 1\-100 and then reclassify those values into 5 risk categories.
4. Spatial join the density to the fishnet.
5. Join to the fishnet, the count of burglaries *in 2018* for each grid cell.
6. Repeat for the risk predictions.
7. Take the rate of 2018 points by model type and risk category. Map and plot accordingly.
Step 1 downloads `burglaries18`.
```
burglaries18 <-
read.socrata("https://data.cityofchicago.org/Public-Safety/Crimes-2018/3i3m-jwuy") %>%
filter(Primary.Type == "BURGLARY" &
Description == "FORCIBLE ENTRY") %>%
mutate(x = gsub("[()]", "", Location)) %>%
separate(x,into= c("Y","X"), sep=",") %>%
mutate(X = as.numeric(X),
Y = as.numeric(Y)) %>%
na.omit %>%
st_as_sf(coords = c("X", "Y"), crs = 4326, agr = "constant") %>%
st_transform('ESRI:102271') %>%
distinct() %>%
.[fishnet,]
```
Next, Kernel density is computed on the 2017 burglaries. `burg_KDE_sf` converts the density to an sf layer; spatially joins (`aggregate`) it to the fishnet; converts the density to percentile ranks (`ntile`); and then reclassifies those into 5 risk categories. Finally, one last spatial join adds the count of observed burglaries in 2018\.
Check out `head(burg_KDE_sf)` to see the result.
```
burg_ppp <- as.ppp(st_coordinates(burglaries), W = st_bbox(final_net))
burg_KD <- spatstat::density.ppp(burg_ppp, 1000)
burg_KDE_sf <- as.data.frame(burg_KD) %>%
st_as_sf(coords = c("x", "y"), crs = st_crs(final_net)) %>%
aggregate(., final_net, mean) %>%
mutate(label = "Kernel Density",
Risk_Category = ntile(value, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label, Risk_Category, burgCount)
```
The same process is repeated for risk predictions. Note the prediction from the LOGO\-CV with the spatial features is being used here.
```
burg_risk_sf <-
filter(reg.summary, Regression == "Spatial LOGO-CV: Spatial Process") %>%
mutate(label = "Risk Predictions",
Risk_Category = ntile(Prediction, 100),
Risk_Category = case_when(
Risk_Category >= 90 ~ "90% to 100%",
Risk_Category >= 70 & Risk_Category <= 89 ~ "70% to 89%",
Risk_Category >= 50 & Risk_Category <= 69 ~ "50% to 69%",
Risk_Category >= 30 & Risk_Category <= 49 ~ "30% to 49%",
Risk_Category >= 1 & Risk_Category <= 29 ~ "1% to 29%")) %>%
cbind(
aggregate(
dplyr::select(burglaries18) %>% mutate(burgCount = 1), ., sum) %>%
mutate(burgCount = replace_na(burgCount, 0))) %>%
dplyr::select(label,Risk_Category, burgCount)
```
For each grid cell and model type (density vs. risk prediction), there is now an associated risk category and 2018 burglary count. Below, a map of the risk categories for both model types is generated, with a sample of `burglaries18` points overlaid. A strongly fit model should show that the highest risk category is uniquely targeted to places with a high density of burglary points.
Is this what we see? High risk categories with few 2018 observed burglaries may suggest latent risk or a poorly fit model. This ambiguity is why accuracy for geospatial risk models is tough to judge. Nevertheless, more features/feature engineering would be helpful.
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
na.omit() %>%
gather(Variable, Value, -label, -Risk_Category, -geometry) %>%
ggplot() +
geom_sf(aes(fill = Risk_Category), colour = NA) +
geom_sf(data = sample_n(burglaries18, 3000), size = .5, colour = "black") +
facet_wrap(~label) +
scale_fill_viridis(discrete = TRUE) +
labs(title="Comparison of Kernel Density and Risk Predictions",
subtitle="2017 burglary risk predictions; 2018 burglaries") +
mapTheme()
```
Finally, the code block below calculates the rate of 2018 burglary points by risk category and model type. A well fit model should show that the risk predictions capture a greater share of 2018 burglaries *in the highest risk category* relative to the Kernel density.
The risk prediction model narrowly edges out the Kernel Density in the top two highest risk categories \- suggesting this simple model has some value relative to the business\-as\-usual hot spot approach. Thus, we may have developed a tool to help target police resources, but is it a useful planning tool?
```
rbind(burg_KDE_sf, burg_risk_sf) %>%
st_set_geometry(NULL) %>% na.omit() %>%
gather(Variable, Value, -label, -Risk_Category) %>%
group_by(label, Risk_Category) %>%
summarize(countBurglaries = sum(Value)) %>%
ungroup() %>%
group_by(label) %>%
mutate(Rate_of_test_set_crimes = countBurglaries / sum(countBurglaries)) %>%
ggplot(aes(Risk_Category,Rate_of_test_set_crimes)) +
geom_bar(aes(fill=label), position="dodge", stat="identity") +
scale_fill_viridis(discrete = TRUE) +
labs(title = "Risk prediction vs. Kernel density, 2018 burglaries") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, vjust = 0.5))
```
5\.6 Conclusion \- Bias but useful?
-----------------------------------
In this chapter, a geospatial risk prediction model borrows the burglary experience in places where it has been observed, and tests whether that experience generalizes to places where burglary risk may be high, despite few actual events. Should these tests hold, the resulting predictions can be thought of as ‘latent risk’ for burglary and can be used to allocate police response across space.
We introduced new and powerful feature engineering strategies to capture the local spatial process. Spatial cross\-validation was also introduced as an important test of across\-space generalizability.
Despite finding that the model generalizes well across different neighborhood contexts, we cannot be sure the model doesn’t suffer from selection bias. As discussed in 5\.1, if law enforcement systematically over\-polices certain communities, and this selection process goes unaccounted for in the model, then the model may be biased regardless of the above tests.
Nevertheless, we have demonstrated that even a simple risk prediction algorithm may outperform traditional hot spot analysis. By adding the element of time/seasonality (see Chapter 8\) and deploying predictions in an intuitive user interface, this approach could easily compete with commercial Predictive Policing products. Should it?
Imagine if ‘back\-testing’ the algorithm on historical data showed that its use would have predicted 20% more burglary than the current allocation process (either Kernel density or otherwise). What if we could show Chicago officials that paying us $100k/year for a license to our algorithm would reduce property crime by $10 million \- should they buy it?
Such a cost/benefit argument makes for a powerful sales pitch \- and it is why data science is such a fast growing industry. However, as I have mentioned, economic bottom lines are far from the only bottom lines in government.
What if the $10 million in savings lead police to increase enforcement and surveillance disproportionately in Black and Brown communities? Worse, what about feedback effects where steering police to these neighborhoods causes more reported crime, which then leads to increased predicted risk?
Stakeholders will either conclude, “but this tech reduces crime \- this is a no\-brainer!” or, “this technology perpetuates a legacy of systematic racism, fear, and disenfranchisement!”
Both opinions have merit and the right approach is a function of community standards. Many big cities are currently at this point. Law enforcement sees machine learning as the next logical progression in the analytics they have been building for years. At the same time, critics and community stakeholders see these as tools of the surveillance state.
These two groups need a forum for communicating these views and for agreeing on the appropriate community standard. I refer to such a forum as one being part of the ‘algorithmic governance’ process, discussed in more detail in the book’s Conclusion.
Finally, while these models may not be appropriate for crime prediction, there are a host of other Planning outcomes that could benefit greatly. In the past I have built these models to predict risks for outcomes like fires and child maltreatment. I have also built them to predict where a company’s next retail store should go. I urge readers to think about how these models could be used to predict ‘opportunity’, not just risk.
As always, this begins with an understanding of the use case and the current resource allocation process. The outcome of all this code should not be just a few maps and tables, but a strategic plan that converts these predictions into actionable intelligence.
5\.7 Assignment \- Predict risk
-------------------------------
Your job is to build a version of this model **for a different outcome that likely suffers from more selection bias than burglary**. You can build this model in Chicago or any other city with sufficient open data resources. Please also add at least **two** new features not used above, and iteratively build models until you have landed on one that optimizes for accuracy and generalizability.
Your final deliverable should be **in R markdown** form **with code blocks**. Please provide the following materials with **brief annotations** (please don’t forget this):
1. A map of your outcome of interest in point form, with some description of what, when, and why you think selection bias may be an issue.
2. A map of your outcome joined to the fishnet.
3. A small multiple map of your risk factors in the fishnet (counts, distance and/or other feature engineering approaches).
4. Local Moran’s I\-related small multiple map of your outcome (see 5\.4\.1\)
5. A small multiple scatterplot with correlations.
6. A histogram of your dependent variable.
7. A small multiple map of model errors by random k\-fold and spatial cross validation.
8. A table of MAE and standard deviation MAE by regression.
9. A table of raw errors by race context for a random k\-fold vs. spatial cross validation regression.
10. The map comparing kernel density to risk predictions for *the next year’s crime*.
11. The bar plot making this comparison.
12. Two paragraphs on why or why not you would recommend your algorithm be put into production.
| Field Specific |
urbanspatial.github.io | https://urbanspatial.github.io/PublicPolicyAnalytics/people-based-ml-models.html |
Chapter 6 People\-based ML models
=================================
(I’m a data scientist, not a graphic designer)
6\.1 Bounce to work
-------------------
Most organizations work on behalf of ‘clients’. Government agencies provide services to firms and households; non\-profits rely on donors to keep the lights on; and businesses sell products and services to customers. At any point, a client may no longer wish to participate, donate, or purchase a good, which will affect the bottom line of the organization.
We have learned how data science can identify risk and opportunity in space, and not surprisingly, these methods can also be used to identify risk/opportunity for clients. In this chapter, we learn how to predict risk for individuals and use the resulting intelligence to develop cost/benefit analyses. Specifically, the goal is to predict ‘churn’ \- the decision of a client not to re\-subscribe to a product or service.
Imagine you are the head of sales and marketing for a pogo\-transit start\-up called ‘Bounce to Work!’. Two years ago, Bounce to Work! rolled out 2,000 dockless, GPS\-enabled pogo sticks around the city, charging riders $3 per bounce or a membership of $30/month for unlimited bouncing. Bounce to Work!’s first year resulted in more bouncing than the company could have ever imagined, and to keep its customers engaged, the company is looking to embark on a membership drive.
You have noticed that every month, between 20% and 25% of the roughly 30,000 members ‘churn’ or do not renew their membership at month’s end. Not accounting for new members, that is a revenue loss of as much as $225,000 (7,500 members \* $30 per membership) per month! This volatility is creating some uncertainty in the company’s expansion efforts. They have asked you to put your data science skills to the test, by predicting for every member, the probability that they will churn, conditional on a host of bouncing data collected by the company. Those *predicted probabilities* will then be used to prioritize who gets a $2 marketing mailer that includes a 20% off membership coupon (an $8 expenditure, in total).
Predicting churn has all sorts of interesting cost/benefit implications for Bounce to Work!. If your algorithm predicts a customer will *not* churn and a mailer is not sent, but they do in fact churn (a *false negative*), then the company is out $30\. If you predict a customer will churn and send them a mailer but they had no intention of churning, then you lose $8 (20% off a $30 membership plus a $2 mailer).
While Bounce to Work! is a slightly absurd premise for this chapter, data science can be most impactful (both positively and negatively) when used to target families and individuals for critical services. In this chapter, a classic churn\-related dataset from IBM is used, although I have altered the variable names to make it more apropos.[45](#fn45)
The next section performs exploratory analysis; Section 3 introduces logistic regression; Sections 4 and 5 focus on goodness of fit for these models as well as cross\-validation. Section 6 delves into cost/benefit and 7 concludes. Below is the data dictionary. Each row is a customer and the outcome of interest, `Churn`, consists of two levels, `Churn` and `No_Churn`. Our goal is to predict this ‘binary’ outcome using demographic and ridership\-specific variables.
| Variable | Description |
| --- | --- |
| SeniorCitizen | Whether the customer is a senior citizen or not (1, 0\) |
| WeekendBouncer | Does this customer bounce on the weekends or only on weekdays? (Yes, No) |
| avgBounceTime | Average length of a pogo trip in minutes |
| bounceInStreet | Does this customer tend to bounce on streets (Yes) or on sidewalks (No)? |
| phoneType | The operating system of the customer’s phone (IPhone, Android, Unknown) |
| avgBounceDistance | Distance of a customer’s average bounce as a categorical variable (\<1 ft., 1\-4 ft., 4\+ ft.) |
| avgBounceHeight | Height of a customer’s Average bounce as a categorical variable (\<1 ft., 1\-2 ft., 2\+ ft.) |
| PaperlessBilling | Whether the customer has paperless billing or not (Yes, No) |
| monthlyDistanceBounced | The amount of distance in miles covered last month |
| totalDistanceBounced | The amount of distance in miles covered since becoming a member |
| Churn | Whether the customer churned or not (Churn or No\_Churn) |
| Data Dictionary |
| --- |
| Table 6\.1 |
```
options(scipen=10000000)
library(tidyverse)
library(caret)
library(knitr)
library(pscl)
library(plotROC)
library(pROC)
library(scales)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette5 <- c("#981FAC","#CB0F8B","#FF006A","#FE4C35","#FE9900")
palette4 <- c("#981FAC","#FF006A","#FE4C35","#FE9900")
palette2 <- c("#981FAC","#FF006A")
churn <- read.csv(file.path(root.dir,"/Chapter6/churnBounce.csv")) %>%
mutate(churnNumeric = as.factor(ifelse(Churn == "Churn", 1, 0))) %>%
na.omit()
```
Let’s start by loading libraries and the data. In the code block above, the outcome is recoded to a binary, `0` and `1` variable called `churnNumeric`. Any field with `NA` is removed from the data.
6\.2 Exploratory analysis
-------------------------
Churn is defined as a customer not re\-subscribing to a service. In this section, the data is explored by visualizing correlation between `churn` and the predictive features. Correlation for a continuous outcome like home price can be visualized with a scatterplot. However, when the dependent variable is binary, with two possible outcomes, a different approach is needed. Useful features are those that exhibit *significant differences* across the `Churn` and `No_Churn` outcomes.
First, note that 1869 of 7032 customers in the dataset churned (27%).
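That breakdown can be verified directly from the raw data:
```
# Tally churners vs. non-churners and their shares
table(churn$Churn)
round(prop.table(table(churn$Churn)), 2)
```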
Figure 6\.1 plots the *mean* for 2 continuous features grouped by `Churn` or `No_Churn`. The interpretation is that the longer a customer bounces, both by trip and historically, the more likely, on average, a customer will re\-up their membership (i.e. `No_Churn`).
```
churn %>%
dplyr::select(Churn,avgBounceTime, totalDistanceBounced) %>%
gather(Variable, value, -Churn) %>%
ggplot(aes(Churn, value, fill=Churn)) +
geom_bar(position = "dodge", stat = "summary", fun = "mean") +
facet_wrap(~Variable, scales = "free") +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Mean",
title = "Feature associations with the likelihood of churn",
subtitle = "(Continous outcomes)") +
plotTheme() + theme(legend.position = "none")
```
Not only is the dependent variable categorical, most of the features are as well. The plots below illustrate whether differences in customer factors associate with the likelihood that they will churn. The `count` function below calculates the total number of customers reported as ‘Yes’ for a given feature.
The interpretation is that more people who re\-up their membership (`No_Churn`) tend to bounce in the street, use paperless billing, and bounce on weekends.
```
churn %>%
dplyr::select(Churn,SeniorCitizen, WeekendBouncer, bounceInStreet, PaperlessBilling) %>%
gather(Variable, value, -Churn) %>%
count(Variable, value, Churn) %>%
filter(value == "Yes") %>%
ggplot(aes(Churn, n, fill = Churn)) +
geom_bar(position = "dodge", stat="identity") +
facet_wrap(~Variable, scales = "free", ncol=4) +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Count",
title = "Feature associations with the likelihood of churn",
subtitle = "Two category features (Yes and No)") +
plotTheme() + theme(legend.position = "none")
```
Finally, the code block below plots three\-category associations. The plot suggests that a customer who bounces 4 feet at a time and upwards of 2 feet in the air has a lower likelihood of `Churn`. Clearly, more experienced bouncers are more likely to continue their membership.
```
churn %>%
dplyr::select(Churn, phoneType, avgBounceDistance, avgBounceHeight) %>%
gather(Variable, value, -Churn) %>%
count(Variable, value, Churn) %>%
ggplot(aes(value, n, fill = Churn)) +
geom_bar(position = "dodge", stat="identity") +
facet_wrap(~Variable, scales="free") +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Count",
title = "Feature associations with the likelihood of churn",
subtitle = "Three category features") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
Churn has two possible outcomes, but imagine in Figure 6\.4 below, if `churnNumeric` varied continuously as a function of `totalDistanceBounced`. The resulting scatterplot is awkward \- OLS regression is not appropriate for binomial outcomes. Instead, Logistic regression is introduced below.
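The code for Figure 6\.4 is not shown in the text; a hypothetical sketch along these lines illustrates the problem, drawing an (inappropriate) OLS fit through a 0/1 outcome:
```
# Hypothetical sketch of Figure 6.4: a binary outcome with an OLS fit overlaid
ggplot(churn, aes(totalDistanceBounced, as.numeric(as.character(churnNumeric)))) +
  geom_point(alpha = 0.3) +
  geom_smooth(method = "lm", se = FALSE, colour = "#FF006A") +
  labs(x = "Total distance bounced", y = "Churn (0 = No_Churn, 1 = Churn)",
       title = "A binary outcome regressed on a continuous feature") +
  plotTheme()
```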
6\.3 Logistic regression
------------------------
Logistic regression, in the Generalized Linear Model (glm) family, predicts the probability an observation is part of a group, like `Churn` or `Not_Churn`, conditional on the features of the model. OLS fits a linear line by minimizing the sum of squared errors, but model fitting in Logistic regression is based on a technique called Maximum Likelihood Estimation. While the math is beyond the scope of this book, the idea is to iteratively fit regression coefficients to the model to maximize the probability of the observed data.
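In general form, the model passes a linear combination of the features through a logistic (sigmoid) link, so predicted probabilities are bounded between 0 and 1:

$$
P(\mathrm{Churn} = 1 \mid X) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}
$$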
Logistic regression fits an S\-shaped logistic curve like the one below. For a simple illustration, imagine I trained an algorithm from the `Observed` test\-taking experience of previous students, to estimate the `Predicted` probability that current students will pass an exam, conditional on one feature \- hours spent studying. According to leftmost panel in Figure 6\.5, as `Observed` study hours increases so does the probability of passing.
The `Predicted` probabilities for the Logistic model fall along the logistic curve and run from 0% probability of passing to 1, or 100% probability of passing. In the rightmost panel above, a model is fit, and the predicted probabilities are plotted along the fitted line colored by their observed Pass/Fail designation.
The predicted probabilities alone would be useful for the students, but I could also set a threshold above which a student would be ‘classified’ as `Pass`. Assume the threshold is 50%. The `Predicted` panel reveals one student who studied for just 2 hours but still passed the test. A threshold of 50% would incorrectly predict or classify that student as having failed the test \- a *False Negative*. As we’ll learn, classification errors are useful for cost/benefit analysis.
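Figure 6\.5’s code is not reproduced here, but a simulated version of the study\-hours example might look like the sketch below; the data and coefficients are invented purely for illustration.
```
# Simulated study-hours example; data are invented for illustration only
set.seed(1)
studyHours <- runif(200, 0, 12)
passExam   <- rbinom(200, 1, plogis(-4 + 0.8 * studyHours))
examFit    <- glm(passExam ~ studyHours, family = binomial(link = "logit"))

data.frame(studyHours, passExam,
           Predicted = predict(examFit, type = "response")) %>%
  ggplot(aes(studyHours, Predicted, colour = as.factor(passExam))) +
    geom_point(size = 2) +
    geom_hline(yintercept = 0.5, linetype = "dashed") +  # a 50% classification threshold
    scale_colour_manual(values = palette2, name = "Observed Pass/Fail") +
    labs(x = "Hours spent studying", y = "Predicted probability of passing") +
    plotTheme()
```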
### 6\.3\.1 Training/Testing sets
Recall, a generalizable model 1\) performs well on new data and 2\) predicts with comparable accuracy across groups. Bounce to Work! doesn’t collect race or geographic coordinates in the data, so groups will not be the focus. Thus, the focus will be on the first definition, which is critical, given that in a production setting, the model is only useful if it can predict for next month’s churn\-ers.
In this section, training and test sets are created. `createDataPartition` is used to split the data. A 50% sample is used here to reduce processing time.
```
set.seed(3456)
trainIndex <- createDataPartition(churn$Churn, p = .50,
list = FALSE,
times = 1)
churnTrain <- churn[ trainIndex,]
churnTest <- churn[-trainIndex,]
```
### 6\.3\.2 Estimate a churn model
Next, a Logistic regression model is estimated as `churnreg1`. To keep things simple, the features are input as is, without any additional feature engineering or feature selection. As you now know, feature engineering is often the difference between a good and a great predictive model and is a critical part of the machine learning workflow.
Unlike an OLS regression which is estimated with the `lm` function, the Logistic regression is estimated with the `glm` function. Here, the `select` operation is piped directly into `glm` to remove two features that are marginally significant as well as the `Churn` feature encoded as a string.[46](#fn46)
```
churnreg1 <- glm(churnNumeric ~ .,
data=churnTrain %>% dplyr::select(-SeniorCitizen,
-WeekendBouncer,
-Churn),
family="binomial" (link="logit"))
summary(churnreg1)
```
OLS Regression estimates coefficients on the scale of the dependent variable. Logistic regression coefficients are on the scale of ‘log\-odds’. Exponentiating (`exp()`) an estimate provides a coefficient as an ‘odds ratio’. In the regression table below, column 1 suggests, for example, that all else equal, bouncing in the street is associated with an odds ratio of exp(\-0\.807\) ≈ 0\.45 \- that is, street bouncers have roughly 45% of the churn odds of sidewalk bouncers, about a 55% reduction.
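Odds ratios can be inspected directly by exponentiating the fitted coefficients:
```
# Convert log-odds coefficients to odds ratios
exp(coef(churnreg1))
```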
Like OLS, Logistic regression also provides p\-value estimates of statistical significance. The missing coefficients may reflect colinearity or small variability across the 3 levels of `avgBounceDistance`. Perhaps the only signal in this feature that really matters is the `1-4 ft.` indicator. Thus, `avgBounceDistance` is recoded in `churnreg2` such that any value that does not equal `1-4 ft.` receives `Other`. New training/testing sets are generated, and the model is estimated once again.
Note there is no R\-Squared presented in the `summary`. Goodness of fit for Logistic regression is not as straightforward as it is for OLS. In the next section, goodness of fit is judged in the context of classification errors.
```
churn <-
mutate(churn, avgBounceDistance = ifelse(avgBounceDistance == "1-4 ft.", "1-4 ft.",
"Other"))
set.seed(3456)
trainIndex <- createDataPartition(churn$Churn, p = .50,
list = FALSE,
times = 1)
churnTrain <- churn[ trainIndex,]
churnTest <- churn[-trainIndex,]
churnreg2 <- glm(churnNumeric ~ .,
data=churnTrain %>% dplyr::select(-SeniorCitizen,
-WeekendBouncer,
-Churn),
family="binomial" (link="logit"))
summary(churnreg2)
```
**Table 6\.2: Churn Regressions** (dependent variable: `churnNumeric`)
| | Churn Regression 1 (1\) | Churn Regression 2 (2\) |
| --- | --- | --- |
| avgBounceTime | \-0\.059\*\*\* (0\.009\) | \-0\.059\*\*\* (0\.009\) |
| bounceInStreetYes | \-0\.807\*\*\* (0\.207\) | \-0\.807\*\*\* (0\.207\) |
| PaperlessBillingYes | 0\.494\*\*\* (0\.103\) | 0\.494\*\*\* (0\.103\) |
| monthlyDistanceBounced | 0\.012\*\* (0\.006\) | 0\.012\*\* (0\.006\) |
| totalDistanceBounced | 0\.0003\*\*\* (0\.0001\) | 0\.0003\*\*\* (0\.0001\) |
| phoneTypeIPhone | \-1\.211\*\*\* (0\.380\) | \-1\.211\*\*\* (0\.380\) |
| phoneTypeUnknown | \-0\.559\*\*\* (0\.195\) | \-0\.559\*\*\* (0\.195\) |
| avgBounceDistance1\-4 ft. | \-0\.490\*\*\* (0\.123\) | |
| avgBounceDistance4\+ ft. | | |
| avgBounceDistanceOther | | 0\.490\*\*\* (0\.123\) |
| avgBounceHeight1\-2 ft. | \-0\.688\*\*\* (0\.143\) | \-0\.688\*\*\* (0\.143\) |
| avgBounceHeight2\+ ft. | \-1\.608\*\*\* (0\.259\) | \-1\.608\*\*\* (0\.259\) |
| Constant | 0\.339 (0\.417\) | \-0\.151 (0\.460\) |
| N | 3,517 | 3,517 |
| Log Likelihood | \-1,482\.423 | \-1,482\.423 |
| AIC | 2,986\.847 | 2,986\.847 |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | | |
6\.4 Goodness of Fit
--------------------
For Logistic regression, a robust model is one which can accurately predict instances of both `Churn` and `No_Churn`. In this section, several options are considered.
The first and weakest option is the ‘Pseudo R Squared’, which, unlike regular R\-Squared, does not vary linearly from 0 to 1\. It does not describe the proportion of the variance in the dependent variable explained by the model. However, it is useful for quickly comparing different model specifications, which may be helpful for feature selection. Below, the ‘McFadden R\-Squared’ is demonstrated \- the higher, the better.
```
pR2(churnreg2)[4]
```
```
## fitting null model for pseudo-r2
```
```
## McFadden
## 0.2721287
```
A more useful approach to goodness of fit is to predict for `churnTest` then tally up the rate that `Churn` and `No_Churn` are predicted correctly. The first step is to create a data frame of test set probabilities, `testProbs`, which includes both the observed churn `Outcome` and predicted probabilities for each observation.
Setting `type="response"` in the `predict` function ensures the predictions are in the form of predicted probabilities. Thus, a probability of 0\.75 means that customer has a 75% probability of churning.
```
testProbs <- data.frame(Outcome = as.factor(churnTest$churnNumeric),
Probs = predict(churnreg2, churnTest, type= "response"))
head(testProbs)
```
```
## Outcome Probs
## 1 0 0.6429831
## 7 0 0.5832574
## 8 0 0.4089603
## 9 1 0.4943297
## 12 0 0.0195177
## 13 0 0.1369811
```
There are a number of interesting data visualizations that can be created by relating the predicted probabilities to the observed churn `Outcome`. Figure 6\.6 shows the distribution of predicted probabilities (x\-axis) for `Churn` and `No_Churn`, recoded as `1` and `0` (y\-axis), respectively.
If `churnreg2` was very predictive, the ‘hump’ of predicted probabilities for `Churn` would cluster around `1` on the x\-axis, while the predicted probabilities for `No_Churn` would cluster around `0`. In reality, the humps are where we might expect them, but with long tails.
```
ggplot(testProbs, aes(x = Probs, fill = as.factor(Outcome))) +
geom_density() +
facet_grid(Outcome ~ .) +
scale_fill_manual(values = palette2) + xlim(0, 1) +
labs(x = "Churn", y = "Density of probabilities",
title = "Distribution of predicted probabilities by observed outcome") +
plotTheme() + theme(strip.text.x = element_text(size = 18),
legend.position = "none")
```
Next, a variable called `predOutcome` is created that classifies any predicted probability greater than 0\.50 (or 50%) as a predicted `Churn` event. 50% seems like a reasonable threshold to start with, but one that we will explore in great detail below.
```
testProbs <-
testProbs %>%
mutate(predOutcome = as.factor(ifelse(testProbs$Probs > 0.5 , 1, 0)))
head(testProbs)
```
```
## Outcome Probs predOutcome
## 1 0 0.6429831 1
## 2 0 0.5832574 1
## 3 0 0.4089603 0
## 4 1 0.4943297 0
## 5 0 0.0195177 0
## 6 0 0.1369811 0
```
Many interesting questions can now be asked. What is overall accuracy rate? Does the model do a better job predicting `Churn` or `No_Churn`? To answer these questions, the code block below outputs a ‘Confusion Matrix’. A `positive` parameter is specified to let the function know that a value of `1` designates churn.
The table at the top of the output is the Confusion Matrix which shows the number of ‘Reference’ or observed instances of churn that are predicted as such. Each entry in the matrix provides a different comparison between observed and predicted, given the 50% threshold.
There were 506 *true positives*, instances where observed `Churn` was correctly predicted as `Churn`. There were 428 *false negatives*, instances where observed `Churn` was incorrectly predicted as `No_Churn`.
There were 2306 *true negatives*, instances where observed `No_Churn` was correctly predicted as `No_Churn`. Finally, there were 275 *false positives*, instances where observed `No_Churn` was incorrectly predicted as `Churn`.
```
caret::confusionMatrix(testProbs$predOutcome, testProbs$Outcome,
positive = "1")
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction 0 1
## 0 2306 428
## 1 275 506
##
## Accuracy : 0.8
## 95% CI : (0.7864, 0.8131)
## No Information Rate : 0.7343
## P-Value [Acc > NIR] : < 0.00000000000000022
##
## Kappa : 0.4592
##
## Mcnemar's Test P-Value : 0.000000009879
##
## Sensitivity : 0.5418
## Specificity : 0.8935
## Pos Pred Value : 0.6479
## Neg Pred Value : 0.8435
## Prevalence : 0.2657
## Detection Rate : 0.1440
## Detection Prevalence : 0.2222
## Balanced Accuracy : 0.7176
##
## 'Positive' Class : 1
##
```
The confusion matrix also calculates overall accuracy, defined as the number of true positives plus true negatives divided by the total number of observations. Here the Accuracy is 80%. Is that good?
Two other metrics, ‘Sensitivity’ and ‘Specificity’, provide even more useful intelligence. The Sensitivity of the model is the proportion of actual positives (1’s) that were predicted to be positive. This is also known as the “True Positive Rate”. The Specificity of the model is the proportion of actual negatives (0’s) that were predicted to be negatives. Also known as the “True Negative Rate”.
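These rates can be reproduced by hand from the confusion matrix above:
```
# Hand-computed rates from the confusion matrix above
TP <- 506; FN <- 428; TN <- 2306; FP <- 275
TP / (TP + FN)  # Sensitivity (True Positive Rate) ~ 0.54
TN / (TN + FP)  # Specificity (True Negative Rate) ~ 0.89
```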
The Sensitivity and Specificity of `churnreg2` is 54% and 89%, respectively. The interpretation is that the model is better at predicting those who are not going to churn than those who will. These metrics provide important intuition about *how useful our model is as a resource allocation tool*. It’s not surprising that the model is better at predicting no churn given that 75% of the data has this outcome. However, given the business process at hand, we would prefer to do a bit better at predicting the churn outcome.
New features, better feature engineering, and a more powerful predictive algorithm would significantly improve this model. Another approach for improving the model is to search for an ‘optimal’ threshold that can limit the most costly errors (from a business standpoint). Most of the remainder of this chapter is devoted to this optimization, beginning with the ‘ROC Curve’ below, a common goodness of fit metric for binary classification models.
### 6\.4\.1 Roc Curves
```
ggplot(testProbs, aes(d = as.numeric(testProbs$Outcome), m = Probs)) +
geom_roc(n.cuts = 50, labels = FALSE, colour = "#FE9900") +
style_roc(theme = theme_grey) +
geom_abline(slope = 1, intercept = 0, size = 1.5, color = 'grey') +
labs(title = "ROC Curve - churnModel")
```
Let’s quickly revisit the business problem: The goal is to identify those customers at risk for churn, so we can offer them a 20% off membership coupon which costs Bounce to Work! $8\.
Consider that at each predicted probability threshold, there is a different set of confusion metrics. A threshold of 10% for instance, means that most predictions will be classified as churn, and most customers will receive a coupon. This may mean Bounce to Work! would ultimately lose money on the promotion. As we’ll learn below, different confusion metrics have different costs and benefits, and searching for an optimal threshold helps ensure we can stay in the black.
The Receiver Operating Characteristic Curve or ROC Curve is useful because it visualizes trade\-offs for two important confusion metrics, while also providing a single goodness of fit indicator. The code block above uses the `plotROC` and `ggplot` packages to create a ROC Curve for the `churnreg2`.
The y\-axis of the ROC curve (topmost, Figure 6\.7\) shows the rate of true positives (observed churn, predicted as churn) for each threshold from 0\.01 to 1\. The x\-axis shows the rate of false positives (observed no churn, predicted as churn) for each threshold.
The notion of trade\-offs is really important here. Follow the y\-axis to 0\.75 and then across to the orange curve. According to the ROC Curve, a threshold that correctly predicts churn 75% of the time will also incorrectly classify more than 20% of non\-churners as `Churn`. The critical question is whether this trade\-off is appropriate given the cost/benefit of the business process.
What is really interesting about these trade\-offs is that they come with diminishing returns. For every additional improvement in the true positive rate, the model will make a greater proportion of false positive errors. Moving from a 75% to a 90% true positive rate dramatically increases the false positive rate.
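A quick sketch makes this trade\-off concrete by tabulating true and false positive rates at a few example thresholds (the thresholds here are arbitrary):
```
# True and false positive rates at a few illustrative thresholds
purrr::map_dfr(c(0.10, 0.25, 0.50, 0.75), function(threshold) {
  predicted <- ifelse(testProbs$Probs > threshold, 1, 0)
  data.frame(Threshold = threshold,
             True_Positive_Rate  = mean(predicted[testProbs$Outcome == "1"] == 1),
             False_Positive_Rate = mean(predicted[testProbs$Outcome == "0"] == 1))
})
```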
To understand how the ROC Curve doubles as a goodness of fit tool, it is helpful to start with the diagonal line in the bottom panel of Figure 6\.7\. Also known as the ‘Coin Flip line’, any true positive rate on this line has an equal corresponding false positive rate. Any classifier with an ROC Curve along the diagonal line is no better than a coin flip. An ROC Curve below the diagonal line represents a very poor fit. Consider a model along such a curve that gets it right only \~7% of the time while getting it wrong 50% of the time.
A ‘Perfect Fit’ may seem desirable, but it is actually indicative of an overfit model. A ROC Curve like this yields 100% true positives and 0 false positives. If the model is so strongly fit to experiences in the training set, it will not likely generalize to experiences in new data. This is a really important point to remember. As before, we can test this model for its out\-of\-sample generalizability with cross\-validation.
Another look at Figure 6\.7 suggests that the usefulness of the algorithm can be judged by the proportion of the plotting area that is *under* the ROC curve. The ‘Area Under the Curve’ metric or AUC for `churnreg2` is 0\.8408635\. AUC is another quick goodness of fit measure to guide feature selection across different models. 50% of the plotting area is under the Coin Flip line and 100% of the plotting area is underneath the Perfect Fit line. Thus, a reasonable AUC is between 0\.5 and 1\.
```
pROC::auc(testProbs$Outcome, testProbs$Probs)
```
```
## Area under the curve: 0.8409
```
6\.5 Cross\-validation
----------------------
As a data scientist working at Bounce to Work!, your goal should be to train a predictive model that will be useful for many months to come, rather than re\-training a model every month. Thus, the model is only as good as its ability to generalize to new data. This section performs cross\-validation using the `caret::train` function as we saw in Chapter 3\.
The `trainControl` parameter is set to run 100 k\-folds and to output predicted probabilities, `classProbs`, for ‘two classes’, `Churn` and `No_Churn`. Additional parameters output AUC (the `train` function refers to this as ‘ROC’) and confusion metrics for each fold.
Importantly, the three metrics in the `cvFit` output are for *mean* AUC, Sensitivity, and Specificity across *all 100 folds*. Note that the dependent variable here is `Churn` not `churnNumeric`.
```
ctrl <- trainControl(method = "cv", number = 100, classProbs=TRUE, summaryFunction=twoClassSummary)
cvFit <- train(Churn ~ ., data = churn %>%
dplyr::select(
-SeniorCitizen,
-WeekendBouncer,
-churnNumeric),
method="glm", family="binomial",
metric="ROC", trControl = ctrl)
cvFit
```
```
## Generalized Linear Model
##
## 7032 samples
## 8 predictor
## 2 classes: 'Churn', 'No_Churn'
##
## No pre-processing
## Resampling: Cross-Validated (100 fold)
## Summary of sample sizes: 6961, 6961, 6961, 6962, 6961, 6962, ...
## Resampling results:
##
## ROC Sens Spec
## 0.8407402 0.5333333 0.8949849
```
The means are not as important as the *across* fold goodness of fit. Figure 6\.8 below plots the distribution of AUC, Sensitivity, and Specificity across the 100 folds. `names(cvFit)` shows that `train` creates several outputs. `cvFit$resample` is a data frame with goodness of fit for each of the 100 folds. The code block below joins to this, the mean goodness of fit (`cvFit$results`), and plots the distributions as a small multiple plot.
The tighter each distribution is to its mean, the more generalizable the model. Our model generalizes well with respect to Specificity \- the rate it correctly predicts `No_Churn` ( *the true negatives* ). It does not generalize as well with respect to Sensitivity \- the rate it correctly predicts `Churn` ( *true positives* ). Note that if the model was overfit with an AUC of `1`, it would also not generalize well to new data.
It seems our would\-be decision\-making tool is inconsistent in how it predicts the business\-relevant outcome, churn. That inconsistency could be systematic \- perhaps it works better for younger bouncers or for more serious bouncers. Or the model could simply lack sufficient predictive power. Either way, this inconsistency will have a direct effect on the business process should this algorithm be put into production.
```
dplyr::select(cvFit$resample, -Resample) %>%
gather(metric, value) %>%
left_join(gather(cvFit$results[2:4], metric, mean)) %>%
ggplot(aes(value)) +
geom_histogram(bins=35, fill = "#FF006A") +
facet_wrap(~metric) +
geom_vline(aes(xintercept = mean), colour = "#981FAC", linetype = 3, size = 1.5) +
scale_x_continuous(limits = c(0, 1)) +
labs(x="Goodness of Fit", y="Count", title="CV Goodness of Fit Metrics",
subtitle = "Across-fold mean reprented as dotted lines") +
plotTheme()
```
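The same across\-fold variation can also be summarized numerically; a smaller standard deviation means a tighter distribution and a more generalizable model. A minimal sketch, assuming the `cvFit` object estimated above:
```
# summarize the across-fold spread of each goodness of fit metric
cvFit$resample %>%
  dplyr::select(-Resample) %>%
  gather(metric, value) %>%
  group_by(metric) %>%
  summarize(mean_fold = mean(value),
            sd_fold = sd(value))
```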
6\.6 Generating costs and benefits
----------------------------------
Our goal again, is to target those at risk of churning with a 20% off coupon that hopefully convinces them to resubscribe. How can our predictive model help improve revenue? Assuming the model successfully predicts churners, and assuming a 20% coupon was enough incentive to stay, can the algorithmic approach help optimize Bounce to Work!’s marketing campaign?
Let’s make the following assumptions about the marketing campaign:
1. The membership costs $30\.
2. Each mailer to a potential churn\-er costs $8\.00\. It includes an offer for 20% off *this month’s* subscription ($6 off) plus $2\.00 for printing/postage.
3. Of those would\-be churners who are sent a mailer, past campaigns show \~50% of recipients re\-subscribe.
While there are many ways to approach a cost/benefit analysis, our approach will be to use the confusion matrix from `testProbs`. Below the cost/benefit for each outcome in our confusion matrix is calculated, like so[47](#fn47):
* **True negative revenue** “We predicted no churn, did not send a coupon, and the customer did not churn”: $30 \- $0 \= **$30**
* **True positive revenue** “We predicted churn and sent the mailer”: $30 \- $8 \= **$22** return for the 50% of cases that re\-subscribe. We lose $30 \+ $2 \= **\-$32** for the 50% of cases who were sent the coupon but did not re\-subscribe.
* **False negative revenue** “We predicted no churn, sent no coupon, and the customer churned”: $0 \- $30 \= **\-$30**
* **False positive revenue** “We predicted churn and sent the mailer; the customer was not going to churn but used the coupon anyway”: $30 \- $8 \= **$22**
For now, note that the greatest cost comes with the 50% of true positives where we offer a coupon but lose the customer anyway. The greatest marginal benefit is in maximizing the number of true negatives \- customers who we correctly predict will not churn. To calculate the total cost/benefit, these confusion metrics are multiplied by their corresponding costs below.
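As a quick sanity check on the true positive arithmetic, the expected revenue *per* true positive blends the $22 return and the \-$32 loss at the \~50% response rate; multiplying that per\-customer figure by the 506 true positives reproduces the `True_Positive` row of Table 6\.3 below:
```
# expected revenue per true positive: 50% re-subscribe at a $22 margin,
# 50% are lost anyway at a $32 loss
tp_per_customer <- (30 - 8) * 0.5 + (-32) * 0.5
tp_per_customer        # -5
tp_per_customer * 506  # -2530
```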
```
cost_benefit_table <-
testProbs %>%
count(predOutcome, Outcome) %>%
summarize(True_Negative = sum(n[predOutcome==0 & Outcome==0]),
True_Positive = sum(n[predOutcome==1 & Outcome==1]),
False_Negative = sum(n[predOutcome==0 & Outcome==1]),
False_Positive = sum(n[predOutcome==1 & Outcome==0])) %>%
gather(Variable, Count) %>%
mutate(Revenue =
case_when(Variable == "True_Negative" ~ Count * 30,
Variable == "True_Positive" ~ ((30 - 8) * (Count * .50)) +
(-32 * (Count * .50)),
Variable == "False_Negative" ~ (-30) * Count,
Variable == "False_Positive" ~ (30 - 8) * Count)) %>%
bind_cols(data.frame(Description = c(
"We predicted no churn and did not send a mailer",
"We predicted churn and sent the mailer",
"We predicted no churn and the customer churned",
"We predicted churn and the customer did not churn")))
```
| Variable | Count | Revenue | Description |
| --- | --- | --- | --- |
| True\_Negative | 2306 | 69180 | We predicted no churn and did not send a mailer |
| True\_Positive | 506 | \-2530 | We predicted churn and sent the mailer |
| False\_Negative | 428 | \-12840 | We predicted no churn and the customer churned |
| False\_Positive | 275 | 6050 | We predicted churn and the customer did not churn |
| Table 6\.3 |
| --- |
Assuming our algorithm was used, the total `Revenue` (column sum) in the Cost/Benefit Table 6\.3 above is $59,860\. Assuming no algorithm was used and 934 customers churned (from `testProbs`), the cost/benefit would be (2,581 instances of no churn \* $30 \= $77,430\) \+ (934 instances of *observed* churn \* \-30 \= \-$28,020\), which leads to a net revenue of $49,410\.
Thus, the algorithm would save Bounce to Work! $10,450\. This savings is based on a 50% threshold, but maybe other thresholds can yield an even greater cost/benefit. Let’s now try to optimize the threshold.
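Those totals are easy to reproduce from the objects already in hand; a quick check, assuming the `cost_benefit_table` created above:
```
# total revenue with the algorithm (column sum of Table 6.3)
sum(cost_benefit_table$Revenue)                     # 59860

# business-as-usual benchmark: no mailers sent, churners simply leave
no_model_revenue <- (2581 * 30) + (934 * -30)
no_model_revenue                                    # 49410

# savings attributable to the model at the 50% threshold
sum(cost_benefit_table$Revenue) - no_model_revenue  # 10450
```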
### 6\.6\.1 Optimizing the cost/benefit relationship
Recall, that a different confusion matrix, set of errors and cost/benefit calculation exists for each threshold. Ideally, the ‘optimal’ threshold is the one that returns the greatest cost/benefit. In this section, a function is created to iteratively loop through each threshold, calculate confusion metrics, and total the revenue for each. The results are then visualized for each threshold.
The `iterateThresholds` function in the `functions.R` script is based on a `while` loop. The threshold, *x*, starts at 0\.01 and, while it is less than 1, a predicted classification, `predOutcome`, is `mutate`d given *x*; confusion metrics are calculated; the cost/benefit is calculated and appended to a data frame called `all_prediction`. Finally, *x* is incremented by 0\.01 and the process continues until *x* \= 1\.
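The `functions.R` version handles the details (tidy evaluation of the column names, rates, and accuracy), but a stripped\-down sketch of the same idea \- shown here only for intuition, not the actual implementation \- might look like this:
```
# illustrative sketch only - see functions.R for the real iterateThresholds
iterateThresholds_sketch <- function(data, observedClass, predictedProbs) {
  observed <- data[[observedClass]]
  probs <- data[[predictedProbs]]
  all_prediction <- data.frame()
  x <- 0.01
  while (x <= 1) {
    predOutcome <- ifelse(probs > x, 1, 0)
    this_threshold <- data.frame(
      Count_TN = sum(predOutcome == 0 & observed == 0),
      Count_TP = sum(predOutcome == 1 & observed == 1),
      Count_FN = sum(predOutcome == 0 & observed == 1),
      Count_FP = sum(predOutcome == 1 & observed == 0),
      Threshold = x)
    all_prediction <- rbind(all_prediction, this_threshold)
    x <- x + 0.01
  }
  all_prediction
}

# usage, passing column names as strings
# (the real function takes unquoted names, e.g. observedClass = Outcome)
# sketch <- iterateThresholds_sketch(testProbs, "Outcome", "Probs")
```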
The `iterateThresholds` function is run below and includes a host of goodness of fit indicators, several of which are returned below. Recall, each row is a different `Threshold`.
```
whichThreshold <-
iterateThresholds(
data=testProbs, observedClass = Outcome, predictedProbs = Probs)
whichThreshold[1:5,]
```
```
## # A tibble: 5 x 10
## Count_TN Count_TP Count_FN Count_FP Rate_TP Rate_FP Rate_FN Rate_TN Accuracy
## <int> <int> <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 350 929 5 2231 0.995 0.864 0.00535 0.136 0.364
## 2 576 926 8 2005 0.991 0.777 0.00857 0.223 0.427
## 3 697 923 11 1884 0.988 0.730 0.0118 0.270 0.461
## 4 818 918 16 1763 0.983 0.683 0.0171 0.317 0.494
## 5 904 913 21 1677 0.978 0.650 0.0225 0.350 0.517
## # ... with 1 more variable: Threshold <dbl>
```
Next, the result is moved to long form and `Revenue` is calculated for each confusion metric at each threshold.
```
whichThreshold <-
whichThreshold %>%
dplyr::select(starts_with("Count"), Threshold) %>%
gather(Variable, Count, -Threshold) %>%
mutate(Revenue =
case_when(Variable == "Count_TN" ~ Count * 30,
Variable == "Count_TP" ~ ((30 - 8) * (Count * .50)) +
(-32 * (Count * .50)),
Variable == "Count_FN" ~ (-30) * Count,
Variable == "Count_FP" ~ (30 - 8) * Count))
```
Figure 6\.9 below plots the `Revenue` for each confusion metric by threshold. This is a plot of trade\-offs. Each is described below:
* **False negative revenue**: “We predicted no churn and the customer churned” \- As the threshold increases, more actual churners go without a mailer, and these losses mount.
* **False positive revenue**: “We predicted churn and sent the mailer; the customer did not churn but used the coupon anyway” \- A low threshold classifies most customers as likely churners, so many non\-churners receive the discounted offer; as the threshold rises, fewer of these mailers go out.
* **True negative revenue**: “We predicted no churn, did not send a mailer, and the customer did not churn” \- As the threshold goes up, more customers are classified as `No_Churn`, and the number of full price paying customers counted here goes up.
* **True positive revenue**: “We predicted churn and sent the mailer” \- Although the coupon convincing 50% of members to re\-subscribe saves some revenue, the cost of actual churn leads to losses for most thresholds.
```
whichThreshold %>%
ggplot(.,aes(Threshold, Revenue, colour = Variable)) +
geom_point() +
scale_colour_manual(values = palette5[c(5, 1:3)]) +
labs(title = "Revenue by confusion matrix type and threshold",
y = "Revenue") +
plotTheme() +
guides(colour=guide_legend(title = "Confusion Matrix"))
```
The next step of the cost/benefit analysis is to calculate the total `Revenue` across confusion metrics for each threshold.
Below, `actualChurn` and `Actual_Churn_Rate` include 50% of the True Positives (who were not swayed by the coupon) and all of the False Negatives (those who never even received a coupon). `Actual_Churn_Revenue_Loss` is loss from this pay period, and `Revenue_Next_Period` is for the next, assuming no new customers are added.
Assuming `testProbs` is representative of Bounce to Work!’s customers, 934 (27%) of customers will churn resulting in a net loss of \-$28,020 and a net revenue of $49,410\. How do the other thresholds compare?
```
whichThreshold_revenue <-
whichThreshold %>%
mutate(actualChurn = ifelse(Variable == "Count_TP", (Count * .5),
ifelse(Variable == "Count_FN", Count, 0))) %>%
group_by(Threshold) %>%
summarize(Revenue = sum(Revenue),
Actual_Churn_Rate = sum(actualChurn) / sum(Count),
Actual_Churn_Revenue_Loss = sum(actualChurn * 30),
Revenue_Next_Period = Revenue - Actual_Churn_Revenue_Loss)
whichThreshold_revenue[1:5,]
```
```
## # A tibble: 5 x 5
## Threshold Revenue Actual_Churn_Rate Actual_Churn_Revenue_L~ Revenue_Next_Peri~
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.01 54787 0.134 14085 40702
## 2 0.02 56520 0.134 14130 42390
## 3 0.03 57413 0.134 14175 43238
## 4 0.04 58256 0.135 14250 44006
## 5 0.05 58819 0.136 14325 44494
```
A threshold of 26% is optimal and yields the greatest revenue at $62,499\. After that mark, losses associated with False Negatives begin to mount (see Figure 6\.9 above).
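Rather than reading the optimum off of Figure 6\.9, it can be pulled directly from `whichThreshold_revenue`; a minimal sketch:
```
# threshold associated with the greatest total revenue
whichThreshold_revenue %>%
  arrange(desc(Revenue)) %>%
  dplyr::select(Threshold, Revenue) %>%
  slice(1)
```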
`Revenue` (this period) and `Revenue_Next_Period` are plotted below for each `Threshold`. Here we assume no new customers are added next pay period.
```
whichThreshold_revenue %>%
dplyr::select(Threshold, Revenue, Revenue_Next_Period) %>%
gather(Variable, Value, -Threshold) %>%
ggplot(aes(Threshold, Value, colour = Variable)) +
geom_point() +
geom_vline(xintercept = pull(arrange(whichThreshold_revenue, -Revenue)[1,1])) +
scale_colour_manual(values = palette2) +
plotTheme() + ylim(0,70000) +
labs(title = "Revenue this pay period and the next by threshold",
subtitle = "Assuming no new customers added next period. Vertical line denotes optimal threshold")
```
These cost/benefit functions are incredibly revealing. It’s clear that `Revenue` (this\_period) declines after the 26% threshold. The trend for `Revenue_Next_Period` is similar, but interestingly, the slope of the line is much steeper. Why do you think this is?
Table 6\.4 shows the `Revenue` and the rate of churn for no model, the default 50% threshold, and the optimal threshold. Not only does the optimal threshold maximize `Revenue` this pay period, but it also loses fewer customers to actual churn in the next pay period, helping to maximize revenues in the long term as well.
| Model | Revenue\_This\_Period | Lost\_Customers\_Next\_Period |
| --- | --- | --- |
| No predictive model | $49,410 | 27% |
| 50% Threshold | $59,860 | 19% |
| 26% Threshold | $62,499 | 16% |
| Table 6\.4 |
| --- |
6\.7 Conclusion \- churn
------------------------
The goal of this chapter is to help pogo\-transit company Bounce to Work! maximize revenues by reducing the number of customers who might not otherwise renew their membership (i.e. churn). A Logistic regression model was estimated to predict `Churn` as a function of customer data and the relevant goodness of fit indicators were introduced. To demonstrate how machine learning can directly influence a business problem, these ‘confusion metrics’ were used to calculate the `Revenue` implications of the model relative to the business\-as\-usual approach, sans model. More experienced readers can swap out `glm` for more advanced machine learning algorithms, and notice an immediate improvement in predictive performance.
Unlike every other chapter in this book, this one focused on resource allocation for a for\-profit company. It may not seem like ‘cost/benefit’ and ‘customer’ are terms that apply to government \- but they absolutely do. Every taxpayer, household, child, parent, and senior is a client of government and all deserve a government that works to their benefit.
The difference is in finding the ‘optimal’ resource allocation approach. This may be possible if the only bottom line is revenue, but it is largely impossible in government, which has many unquantifiable bottom lines like equity and social cohesion. Armed with the necessary domain expertise, however, government data scientists can still use these methods to improve service delivery.
There is no shortage of use cases in government that could benefit from these models. Which homelessness or drug treatment intervention might increase the probability of success? Which landlords are likely renting and evicting tenants illegally? Which buildings are at risk of falling down? Which gang member is at risk for gun crime? Which Medicaid recipient might benefit from in\-home nursing care? And the list goes on.
There are two ways to judge the usefulness of these models. First, does the data\-driven approach improve outcomes relative to the business\-as\-usual approach? Chapter 4 compared the geospatial risk model to traditional hot spot mapping. Here, the business\-as\-usual approach may have just allowed churn; perhaps an employee had a method for choosing which customers receive a coupon; or maybe they pulled names from a hat. Consider that a confusion matrix is possible for each of these approaches.
The second way to judge utility is through the lens of ‘fairness’, which is the subject of the next chapter. Just because the model may improve cost/benefit, doesn’t necessarily make for a more useful decision\-making approach if it further disenfranchises one group or another. There are many ways to judge fairness, and our approach, not surprisingly, will be to analyze across\-group generalizability.
Ultimately, an algorithm’s usefulness should not be evaluated by an engineer alone, but by a team of engineers and domain experts who can evaluate models within the appropriate context.
6\.8 Assignment \- Target a subsidy
-----------------------------------
Emil City is considering a more proactive approach for targeting home owners who qualify for a home repair tax credit program. This tax credit program has been around for close to twenty years, and while the Department of Housing and Community Development (HCD) tries to proactively reach out to eligible homeowners every year, the uptake of the credit is woefully inadequate. Typically, only 11% of the eligible homeowners they contact take the credit.
The consensus at HCD is that the low conversion rate is due to the fact that the agency reaches out to eligible homeowners at random. Unfortunately, we don’t know the cost/benefit of previous campaigns, but we should assume it wasn’t good. To move toward a more targeted campaign, HCD has recently hired you, their very first data scientist, to convert all the client\-level data collected from previous campaigns into a decision\-making analytic that can better target their limited outreach resources.
You have been given a random sample of records. Your goal is to train the best classifier you can and use the results to inform a cost/benefit analysis.
The data for this exercise has been adapted from Moro \& Rita (2014\)[48](#fn48). Some variables have been changed to suit the current use case. The dependent variable is `y`, which is `Yes` to indicate that a homeowner took the credit and `No` to indicate they did not. There are many features related to this outcome described in the table below.
| Variable | Description | Class | Notes |
| --- | --- | --- | --- |
| age | Age of homeowner | Numeric | |
| job | Occupation indicator | Category | |
| marital | Marital Status | Category | |
| education | Educational attainment | Category | |
| taxLien | Is there a lien against the owner’s property? | Category | |
| mortgage | Is the owner carrying a mortgage | Category | |
| taxbill\_in\_phl | Is the owner’s full time residence not in Philadelphia | Category | |
| contact | How have we previously contacted individual? | Category | |
| month | Month we last contacted individual | Category | |
| day\_of\_week | Day of the week we last contacted individual | Category | |
| campaign | \# of contacts for this ind for this campaign | Category | |
| pdays | \# days after ind. last contacted from a previous program | Category | 999 \= client not previously contacted |
| previous | \# of contacts before this campaign for this ind. | Numeric | |
| poutcome | Outcome of the previous marketing campaign | Categorical | |
| unemploy\_rate | Unemployment rate at time of campaign | Numeric | |
| cons.price.idx | Consumer Price Index at campaign time | Numeric | |
| cons.conf.idx | Consumer confidence index at time of campaign | Numeric | |
| inflation\_rate | US Inflation Rate | Numeric | daily indicator |
| spent\_on\_repairs | Amount annually spent on home repairs | Numeric | |
| y | Indicates the individual took the credit | Category | Yes/No (but you may wish to recode to numeric) |
| Table 6\.5 |
| --- |
After studying the credit program and related materials, you construct some stylized facts to help guide your cost/benefit analysis. If we predict that a household will take the credit, then HCD is willing to allocate **$2,850** per homeowner which includes staff and resources to facilitate mailers, phone calls, and information/counseling sessions at the HCD offices. Given the new targeting algorithm, we should now assume 25% of contacted eligible homeowners take the credit. The remainder receive the marketing allocation but do not take the credit.
The credit costs **$5,000** per homeowner which can be used toward home improvement. Academic researchers in Philadelphia evaluated the program finding that houses that transacted after taking the credit, sold with a **$10,000** premium, on average. Homes surrounding the repaired home see an aggregate premium of **$56,000**, on average. Below is a run down of the costs and benefits for each potential outcome of the model you will build. This is a public\-sector use case, so the cost/benefit is not as straightforward as Bounce to Work! If you feel that changing a constraint would be helpful, then please do so.
1. True Positive \- Predicted correctly homeowner would take the credit; allocated the marketing resources, and 25% took the credit.
2. True Negative \- Predicted correctly homeowner would not take the credit, no marketing resources were allocated, and no credit was allocated.
3. False Positive \- Predicted incorrectly homeowner would take the credit; allocated marketing resources; no credit allocated.
4. False Negative \- We predicted that a homeowner would not take the credit but they did. These are likely homeowners who signed up for reasons unrelated to the marketing campaign. Thus, we ‘0 out’ this category, assuming the cost/benefit of this is $0\.
Deliverables:
1. One paragraph on the motivation for the analysis.
2. Develop and interpret data visualizations that describe feature importance/correlation.
3. Split your data into a 65/35 training/test set.
4. The Sensitivity (True Positive Rate) for a model with all the features is very low. **Engineer new features** that significantly increase the Sensitivity.
1. Interpret your new features in one paragraph.
2. Show a regression summary for both the kitchen sink and your engineered regression.
3. Cross validate both models; compare and interpret two facetted plots of ROC, Sensitivity and Specificity.
5. Output an ROC curve for your new model and interpret it.
6. Develop a cost benefit analysis.
1. Write out the cost/benefit equation for each confusion metric.
2. Create the ‘Cost/Benefit Table’ as seen above.
3. Plot the confusion metric outcomes for each Threshold.
4. Create two small multiple plots that show `Threshold` as a function of `Total_Revenue` and `Total_Count_of_Credits`. Interpret this.
5. Create a table of the `Total_Revenue` and `Total_Count_of_Credits` allocated for 2 categories. `50%_Threshold` and your `Optimal_Threshold`.
7. Conclude whether and why this model should or shouldn’t be put into production. What could make the model better? What would you do to ensure that the marketing materials resulted in a better response rate?
6\.1 Bounce to work
-------------------
Most organizations work on behalf of ‘clients’. Government agencies provide services to firms and households; non\-profits rely on donors to keep the lights on; and businesses sell products and services to customers. At any point, a client may no longer wish to participate, donate, or purchase a good, which will affect the bottom line of the organization.
We have learned how data science can identify risk and opportunity in space, and not surprisingly, these methods can also be used to identify risk/opportunity for clients. In this chapter, we learn how to predict risk for individuals and use the resulting intelligence to develop cost/benefit analyses. Specifically, the goal is to predict ‘churn’ \- the decision of a client not to re\-subscribe to a product or service.
Imagine you are the head of sales and marketing for a pogo\-transit start\-up called ‘Bounce to Work!’. Two years ago, Bounce to Work! rolled out 2,000 dockless, GPS\-enabled pogo sticks around the city charging riders $3 per bounce or a membership of $30/month for unlimited bouncing. Bounce to Work!’s first year resulted in more bouncing than the company could have ever imagined, and to keep its customers engaged, the company is looking to embark on a membership drive.
You have noticed that every month, between 20% and 25% of the roughly 30,000 members ‘churn’ or do not renew their membership at month’s end. Not accounting for new members, that is a revenue loss of as much as $225,000 (7,500 members \* $30 per membership) per month! This volatility is creating some uncertainty in the company’s expansion efforts. They have asked you to put your data science skills to the test, by predicting for every member, the probability that they will churn, conditional on a host of bouncing data collected by the company. Those *predicted probabilities* will then be used to prioritize who gets a $2 marketing mailer that includes a 20% off membership coupon (an $8 expenditure, in total).
Predicting churn has all sorts of interesting cost/benefit implications for Bounce to Work!. If your algorithm predicts a customer will *not* churn and a mailer is not sent, but they do in fact churn (a *false negative*), then the company is out $30\. If you predict a customer will churn and send them a mailer but they had no intention of churning, then you lose $8 (20% off a $30 membership plus a $2 mailer).
While Bounce to Work! is a slightly absurd premise for this chapter, data science can be most impactful (both positively and negatively) when used to target families and individuals for critical services. In this chapter, a classic churn\-related dataset from IBM is used, although I have altered the variables names to make it more apropos.[45](#fn45)
The next section performs exploratory analysis; Section 3 introduces logistic regression; Sections 4 and 5 focus on goodness of fit for these models as well as cross\-validation. Section 6 delves into cost/benefit and 7 concludes. Below is the data dictionary. Each row is a customer and the outcome of interest, `Churn`, consists of two levels, `Churn` and `No_Churn`. Our goal is to predict this ‘binary’ outcome using demographic and ridership\-specific variables.
| Variable | Description |
| --- | --- |
| SeniorCitizen | Whether the customer is a senior citizen or not (1, 0\) |
| WeekendBouncer | Does this customer bounce on the weekends or only on weekdays? (Yes, No) |
| avgBounceTime | Average length of a pogo trip in minutes |
| bounceInStreet | Does this customer tend to bounce on streets (Yes) or on sidewalks (No)? |
| phoneType | The operating system of the customer’s phone (IPhone, Android, Unknown) |
| avgBounceDistance | Distance of a customer’s average bounce as a categorical variable (\<1 ft., 1\-4 ft., 4\+ ft.) |
| avgBounceHeight | Height of a customer’s Average bounce as a categorical variable (\<1 ft., 1\-2 ft., 2\+ ft.) |
| PaperlessBilling | Whether the customer has paperless billing or not (Yes, No) |
| monthlyDistanceBounced | The amount of distance in miles covered last month |
| totalDistanceBounced | The amount of distance in miles covered since becoming a member |
| Churn | Whether the customer churned or not (Churn or No\_Churn) |
| Table 6\.1: Data Dictionary |
| --- |
```
options(scipen=10000000)
library(tidyverse)
library(caret)
library(knitr)
library(pscl)
library(plotROC)
library(pROC)
library(scales)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette5 <- c("#981FAC","#CB0F8B","#FF006A","#FE4C35","#FE9900")
palette4 <- c("#981FAC","#FF006A","#FE4C35","#FE9900")
palette2 <- c("#981FAC","#FF006A")
churn <- read.csv(file.path(root.dir,"/Chapter6/churnBounce.csv")) %>%
mutate(churnNumeric = as.factor(ifelse(Churn == "Churn", 1, 0))) %>%
na.omit()
```
Let’s start by loading libraries and the data. In the code block above, the outcome is recoded to a binary, `0` and `1` variable called `churnNumeric`. Any field with `NA` is removed from the data.
6\.2 Exploratory analysis
-------------------------
Churn is defined as a customer not re\-subscribing to a service. In this section, the data is explored by visualizing correlation between `churn` and the predictive features. Correlation for a continuous outcome like home price can be visualized with a scatterplot. However, when the dependent variable is binary, with two possible outcomes, a different approach is needed. Useful features are those that exhibit *significant differences* across the `Churn` and `No_Churn` outcomes.
First, note that 1869 of 7032 customers in the dataset churned (27%).
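That churn rate is easy to confirm with a quick tally; a minimal check, assuming the `churn` data frame loaded above:
```
# count each outcome and its share of all customers
churn %>%
  count(Churn) %>%
  mutate(pct = n / sum(n))   # 1869 churned, roughly 27%
```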
Figure 6\.1 plots the *mean* for 2 continuous features grouped by `Churn` or `No_Churn`. The interpretation is that the longer a customer bounces, both by trip and historically, the more likely, on average, they are to re\-up their membership (i.e. `No_Churn`).
```
churn %>%
dplyr::select(Churn,avgBounceTime, totalDistanceBounced) %>%
gather(Variable, value, -Churn) %>%
ggplot(aes(Churn, value, fill=Churn)) +
geom_bar(position = "dodge", stat = "summary", fun = "mean") +
facet_wrap(~Variable, scales = "free") +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Mean",
title = "Feature associations with the likelihood of churn",
subtitle = "(Continous outcomes)") +
plotTheme() + theme(legend.position = "none")
```
Not only is the dependent variable categorical, most of the features are as well. The plots below illustrate whether differences in customer factors associate with the likelihood that they will churn. The `count` function below calculates the total number of customers reported as ‘Yes’ for a given feature.
The interpretation is that more people who re\-up their membership (`No_Churn`) tend to bounce in the street, use paperless billing, and bounce on weekends.
```
churn %>%
dplyr::select(Churn,SeniorCitizen, WeekendBouncer, bounceInStreet, PaperlessBilling) %>%
gather(Variable, value, -Churn) %>%
count(Variable, value, Churn) %>%
filter(value == "Yes") %>%
ggplot(aes(Churn, n, fill = Churn)) +
geom_bar(position = "dodge", stat="identity") +
facet_wrap(~Variable, scales = "free", ncol=4) +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Count",
title = "Feature associations with the likelihood of churn",
subtitle = "Two category features (Yes and No)") +
plotTheme() + theme(legend.position = "none")
```
Finally, the code block below plots three category associations. The plot suggests that a customer who bounces 4 feet at a time and upwards of 2 feet in the air has a lower likelihood of `Churn`. Clearly, more experienced bouncers are more likely to continue their membership.
```
churn %>%
dplyr::select(Churn, phoneType, avgBounceDistance, avgBounceHeight) %>%
gather(Variable, value, -Churn) %>%
count(Variable, value, Churn) %>%
ggplot(aes(value, n, fill = Churn)) +
geom_bar(position = "dodge", stat="identity") +
facet_wrap(~Variable, scales="free") +
scale_fill_manual(values = palette2) +
labs(x="Churn", y="Count",
title = "Feature associations with the likelihood of churn",
subtitle = "Three category features") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
Churn has two possible outcomes, but imagine in Figure 6\.4 below, if `churnNumeric` varied continuously as a function of `totalDistanceBounced`. The resulting scatterplot is awkward. OLS regression is not appropriate for binomial outcomes. Instead, Logistic regression is introduced below.
6\.3 Logistic regression
------------------------
Logistic regression, in the Generalized Linear Model (glm) family, predicts the probability an observation is part of a group, like `Churn` or `Not_Churn`, conditional on the features of the model. OLS fits a linear line by minimizing the sum of squared errors, but model fitting in Logistic regression is based on a technique called Maximum Likelihood Estimation. While the math is beyond the scope of this book, the idea is to iteratively fit regression coefficients to the model to maximize the probability of the observed data.
Logistic regression fits an S\-shaped logistic curve like the one below. For a simple illustration, imagine I trained an algorithm from the `Observed` test\-taking experience of previous students, to estimate the `Predicted` probability that current students will pass an exam, conditional on one feature \- hours spent studying. According to leftmost panel in Figure 6\.5, as `Observed` study hours increases so does the probability of passing.
The `Predicted` probabilities for the Logistic model fall along the logistic curve and run from 0% probability of passing to 1, or 100% probability of passing. In the rightmost panel above, a model is fit, and the predicted probabilities are plotted along the fitted line colored by their observed Pass/Fail designation.
The predicted probabilities alone would be useful for the students, but I could also set a threshold above which a student would be ‘classified’ as `Pass`. Assume the threshold is 50%. The `Predicted` panel reveals one student who studied for just 2 hours but still passed the test. A threshold of 50% would incorrectly predict or classify that student as having failed the test \- a *False Negative*. As we’ll learn, classification errors are useful for cost/benefit analysis.
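The figure described above is built on hypothetical test\-taking data, but the S\-shaped fit is easy to reproduce. Below is a minimal sketch using simulated study hours of my own (these are not the data behind Figure 6\.5):
```
# simulate hypothetical study hours and pass/fail outcomes
set.seed(1234)
studyHours <- runif(100, 0, 10)
pass <- rbinom(100, 1, prob = plogis(-4 + 0.9 * studyHours))

# fit a one-feature logistic regression and plot the fitted S-curve
examReg <- glm(pass ~ studyHours, family = "binomial" (link = "logit"))

data.frame(studyHours, pass,
           Predicted = predict(examReg, type = "response")) %>%
  ggplot(aes(studyHours, Predicted, colour = as.factor(pass))) +
  geom_point() +
  labs(x = "Hours studied", y = "Predicted probability of passing",
       colour = "Observed Pass/Fail")
```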
### 6\.3\.1 Training/Testing sets
Recall, a generalizable model 1\) performs well on new data and 2\) predicts with comparable accuracy across groups. Bounce to Work! doesn’t collect race or geographic coordinates in the data, so groups will not be the focus. Thus, the focus will be on the first definition, which is critical, given that in a production setting, the model is only useful if it can predict for next month’s churn\-ers.
In this section, training and test sets are created. `createDataPartition` is used to split the data. A 50% sample is used here to reduce processing time.
```
set.seed(3456)
trainIndex <- createDataPartition(churn$Churn, p = .50,
list = FALSE,
times = 1)
churnTrain <- churn[ trainIndex,]
churnTest <- churn[-trainIndex,]
```
### 6\.3\.2 Estimate a churn model
Next, a Logistic regression model is estimated as `churnreg1`. To keep things simple, the features are input as is, without any additional feature engineering or feature selection. As you now know, feature engineering is often the difference between a good and a great predictive model and is a critical part of the machine learning workflow.
Unlike an OLS regression which is estimated with the `lm` function, the Logistic regression is estimated with the `glm` function. Here, the `select` operation is piped directly into `glm` to remove two features that are marginally significant as well as the `Churn` feature encoded as a string.[46](#fn46)
```
churnreg1 <- glm(churnNumeric ~ .,
data=churnTrain %>% dplyr::select(-SeniorCitizen,
-WeekendBouncer,
-Churn),
family="binomial" (link="logit"))
summary(churnreg1)
```
OLS Regression estimates coefficients on the scale of the dependent variable. Logistic regression coefficients are on the scale of ‘log\-odds’. Exponentiating (`exp()`) an estimate provides a coefficient as an ‘odds ratio’. In the regression table below, column 1 suggests, for example, that all else equal, the odds of churn for a customer who bounces in the street are roughly 45% of the odds for one who does not \- a reduction in the churn odds of more than half.
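For example, the full set of odds ratios for `churnreg1` can be recovered by exponentiating its coefficient vector; a quick check:
```
# convert log-odds coefficients to odds ratios
exp(coef(churnreg1))

# e.g., exp(-0.807) is roughly 0.45 for bounceInStreetYes: street bouncers
# carry a little under half the churn odds of sidewalk bouncers, all else equal
```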
Like OLS, Logistic regression also provides p\-value estimates of statistical significance. The missing coefficients may reflect collinearity or small variability across the 3 levels of `avgBounceDistance`. Perhaps the only signal in this feature that really matters is the `1-4 ft.` indicator. Thus, `avgBounceDistance` is recoded in `churnreg2` such that any value that does not equal `1-4 ft.` receives `Other`. New training/testing sets are generated, and the model is estimated once again.
Note there is no R\-Squared presented in the `summary`. Goodness of fit for Logistic regression is not as straightforward as it is for OLS. In the next section, goodness of fit is judged in the context of classification errors.
```
churn <-
mutate(churn, avgBounceDistance = ifelse(avgBounceDistance == "1-4 ft.", "1-4 ft.",
"Other"))
set.seed(3456)
trainIndex <- createDataPartition(churn$Churn, p = .50,
list = FALSE,
times = 1)
churnTrain <- churn[ trainIndex,]
churnTest <- churn[-trainIndex,]
churnreg2 <- glm(churnNumeric ~ .,
data=churnTrain %>% dplyr::select(-SeniorCitizen,
-WeekendBouncer,
-Churn),
family="binomial" (link="logit"))
summary(churnreg2)
```
**Table 6\.2: Churn Regressions**
| | | |
| | churnNumeric | |
| | Churn Regression 1 | Churn Regression 2 |
| | (1\) | (2\) |
| | | |
| avgBounceTime | \-0\.059\*\*\* (0\.009\) | \-0\.059\*\*\* (0\.009\) |
| bounceInStreetYes | \-0\.807\*\*\* (0\.207\) | \-0\.807\*\*\* (0\.207\) |
| PaperlessBillingYes | 0\.494\*\*\* (0\.103\) | 0\.494\*\*\* (0\.103\) |
| monthlyDistanceBounced | 0\.012\*\* (0\.006\) | 0\.012\*\* (0\.006\) |
| totalDistanceBounced | 0\.0003\*\*\* (0\.0001\) | 0\.0003\*\*\* (0\.0001\) |
| phoneTypeIPhone | \-1\.211\*\*\* (0\.380\) | \-1\.211\*\*\* (0\.380\) |
| phoneTypeUnknown | \-0\.559\*\*\* (0\.195\) | \-0\.559\*\*\* (0\.195\) |
| avgBounceDistance1\-4 ft. | \-0\.490\*\*\* (0\.123\) | |
| avgBounceDistance4\+ ft. | | |
| avgBounceDistanceOther | | 0\.490\*\*\* (0\.123\) |
| avgBounceHeight1\-2 ft. | \-0\.688\*\*\* (0\.143\) | \-0\.688\*\*\* (0\.143\) |
| avgBounceHeight2\+ ft. | \-1\.608\*\*\* (0\.259\) | \-1\.608\*\*\* (0\.259\) |
| Constant | 0\.339 (0\.417\) | \-0\.151 (0\.460\) |
| N | 3,517 | 3,517 |
| Log Likelihood | \-1,482\.423 | \-1,482\.423 |
| AIC | 2,986\.847 | 2,986\.847 |
| | | |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | | |
6\.4 Goodness of Fit
--------------------
For Logistic regression, a robust model is one which can accurately predict instances of both `Churn` and `No_Churn`. In this section, several options are considered.
The first and weakest option is the ‘Pseudo R\-Squared’, which, unlike regular R\-Squared, does not vary linearly from 0 to 1\. It does not describe the proportion of the variance in the dependent variable explained by the model. However, it is useful for quickly comparing different model specifications, which may be helpful for feature selection. Below, the ‘McFadden R\-Squared’ is demonstrated \- the higher, the better.
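For intuition, McFadden’s statistic compares the log\-likelihood of the fitted model to that of an intercept\-only (‘null’) model; a minimal sketch that should approximate the `pR2` output below:
```
# McFadden pseudo R-squared by hand: 1 - ll(model) / ll(null model)
nullReg <- glm(churnNumeric ~ 1, data = churnTrain, family = "binomial")
1 - as.numeric(logLik(churnreg2)) / as.numeric(logLik(nullReg))
```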
```
pR2(churnreg2)[4]
```
```
## fitting null model for pseudo-r2
```
```
## McFadden
## 0.2721287
```
A more useful approach to goodness of fit is to predict for `churnTest` then tally up the rate that `Churn` and `No_Churn` are predicted correctly. The first step is to create a data frame of test set probabilities, `testProbs`, which includes both the observed churn `Outcome` and predicted probabilities for each observation.
Setting `type="response"` in the `predict` function ensures the predictions are in the form of predicted probabilities. Thus, a probability of 0\.75 means that customer has a 75% probability of churning.
```
testProbs <- data.frame(Outcome = as.factor(churnTest$churnNumeric),
Probs = predict(churnreg2, churnTest, type= "response"))
head(testProbs)
```
```
## Outcome Probs
## 1 0 0.6429831
## 7 0 0.5832574
## 8 0 0.4089603
## 9 1 0.4943297
## 12 0 0.0195177
## 13 0 0.1369811
```
There are a number of interesting data visualizations that can be created by relating the predicted probabilities to the observed churn `Outcome`. Figure 6\.6 shows the distribution of predicted probabilities (x\-axis) for `Churn` and `No_Churn`, recoded as `1` and `0` (y\-axis), respectively.
If `churnreg2` was very predictive, the ‘hump’ of predicted probabilities for `Churn` would cluster around `1` on the x\-axis, while the predicted probabilities for `No_Churn` would cluster around `0`. In reality, the humps are where we might expect them, but with long tails.
```
ggplot(testProbs, aes(x = Probs, fill = as.factor(Outcome))) +
geom_density() +
facet_grid(Outcome ~ .) +
scale_fill_manual(values = palette2) + xlim(0, 1) +
labs(x = "Churn", y = "Density of probabilities",
title = "Distribution of predicted probabilities by observed outcome") +
plotTheme() + theme(strip.text.x = element_text(size = 18),
legend.position = "none")
```
Next, a variable called `predOutcome` is created that classifies any predicted probability greater than 0\.50 (or 50%) as a predicted `Churn` event. 50% seems like a reasonable threshold to start with, but one that we will explore in great detail below.
```
testProbs <-
testProbs %>%
mutate(predOutcome = as.factor(ifelse(testProbs$Probs > 0.5 , 1, 0)))
head(testProbs)
```
```
## Outcome Probs predOutcome
## 1 0 0.6429831 1
## 2 0 0.5832574 1
## 3 0 0.4089603 0
## 4 1 0.4943297 0
## 5 0 0.0195177 0
## 6 0 0.1369811 0
```
Many interesting questions can now be asked. What is the overall accuracy rate? Does the model do a better job predicting `Churn` or `No_Churn`? To answer these questions, the code block below outputs a ‘Confusion Matrix’. A `positive` parameter is specified to let the function know that a value of `1` designates churn.
The table at the top of the output is the Confusion Matrix which shows the number of ‘Reference’ or observed instances of churn that are predicted as such. Each entry in the matrix provides a different comparison between observed and predicted, given the 50% threshold.
There were 506 *true positives*, instances where observed `Churn` was correctly predicted as `Churn`. There were 428 *false negatives*, instances where observed `Churn` was incorrectly predicted as `No_Churn`.
There were 2306 *true negatives*, instances where observed `No_Churn` was correctly predicted as `No_Churn`. Finally, there were 275 *false positives*, instances where observed `No_Churn` was incorrectly predicted as `Churn`.
```
caret::confusionMatrix(testProbs$predOutcome, testProbs$Outcome,
positive = "1")
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction 0 1
## 0 2306 428
## 1 275 506
##
## Accuracy : 0.8
## 95% CI : (0.7864, 0.8131)
## No Information Rate : 0.7343
## P-Value [Acc > NIR] : < 0.00000000000000022
##
## Kappa : 0.4592
##
## Mcnemar's Test P-Value : 0.000000009879
##
## Sensitivity : 0.5418
## Specificity : 0.8935
## Pos Pred Value : 0.6479
## Neg Pred Value : 0.8435
## Prevalence : 0.2657
## Detection Rate : 0.1440
## Detection Prevalence : 0.2222
## Balanced Accuracy : 0.7176
##
## 'Positive' Class : 1
##
```
The confusion matrix also calculates overall accuracy, defined as the number of true positives plus true negatives divided by the total number of observations. Here the Accuracy is 80%. Is that good?
Two other metrics, ‘Sensitivity’ and ‘Specificity’, provide even more useful intelligence. The Sensitivity of the model is the proportion of actual positives (1’s) that were predicted to be positive, also known as the “True Positive Rate”. The Specificity of the model is the proportion of actual negatives (0’s) that were predicted to be negative, also known as the “True Negative Rate”.
The Sensitivity and Specificity of `churnreg2` are 54% and 89%, respectively. The interpretation is that the model is better at predicting those who are not going to churn than those who will. These metrics provide important intuition about *how useful our model is as a resource allocation tool*. It’s not surprising that the model is better at predicting no churn given that roughly 73% of the data has this outcome. However, given the business process at hand, we would prefer to do a bit better at predicting the churn outcome.
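Both rates can be reproduced directly from the confusion matrix counts reported above; a quick check:
```
# Sensitivity: true positives / all observed positives
506 / (506 + 428)    # ~0.54

# Specificity: true negatives / all observed negatives
2306 / (2306 + 275)  # ~0.89
```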
New features, better feature engineering, and a more powerful predictive algorithm would significantly improve this model. Another approach for improving the model is to search for an ‘optimal’ threshold that can limit the most costly errors (from a business standpoint). Most of the remainder of this chapter is devoted to this optimization, beginning with the ‘ROC Curve’ below, a common goodness of fit metric for binary classification models.
### 6\.4\.1 Roc Curves
```
ggplot(testProbs, aes(d = as.numeric(testProbs$Outcome), m = Probs)) +
geom_roc(n.cuts = 50, labels = FALSE, colour = "#FE9900") +
style_roc(theme = theme_grey) +
geom_abline(slope = 1, intercept = 0, size = 1.5, color = 'grey') +
labs(title = "ROC Curve - churnModel")
```
Let’s quickly revisit the business problem: The goal is to identify those customers at risk for churn, so we can offer them a 20% off membership coupon which costs Bounce to Work! $8\.
Consider that at each predicted probability threshold, there is a different set of confusion metrics. A threshold of 10% for instance, means that most predictions will be classified as churn, and most customers will receive a coupon. This may mean Bounce to Work! would ultimately lose money on the promotion. As we’ll learn below, different confusion metrics have different costs and benefits, and searching for an optimal threshold helps ensure we can stay in the black.
The Receiver Operating Characteristic Curve or ROC Curve is useful because it visualizes trade\-offs for two important confusion metrics, while also providing a single goodness of fit indicator. The code block above uses the `plotROC` and `ggplot` packages to create a ROC Curve for the `churnreg2`.
The y\-axis of the ROC curve (topmost, Figure 6\.7\) shows the rate of true positives (observed churn, predicted as churn) for each threshold from 0\.01 to 1\. The x\-axis shows the rate of false positives (observed no churn, predicted as churn) for each threshold.
The notion of trade\-offs is really important here. Follow the y\-axis to 0\.75 and then across to the orange curve. According to the ROC Curve, a threshold that correctly predicts churn 75% of the time will incorrectly predict `Churn` for more than 20% of the customers who do not churn. The critical question is whether this trade\-off is appropriate given the cost/benefit of the business process.
What is really interesting about these trade\-offs is that they come with diminishing returns. For every additional improvement in the true positive rate, the model will make a greater proportion of false positive errors. Moving from a 75% to a 90% true positive rate dramatically increases the false positive rate.
To understand how the ROC Curve doubles as a goodness of fit tool, it is helpful to start with the diagonal line in the bottom panel of Figure 6\.7\. Also known as the ‘Coin Flip line’, any true positive rate on this line has an equal corresponding false positive rate. Any classifier with an ROC Curve along the diagonal line is no better than a coin flip. An ROC Curve below the diagonal line represents a very poor fit. Consider a point along this line where a model gets it right only \~7% of the time but gets it wrong 50% of the time.
A ‘Perfect Fit’ may seem desirable, but it is actually indicative of an overfit model. A ROC Curve like this yields a 100% true positive rate and a 0% false positive rate. A model fit this tightly to the experiences in the training set is unlikely to generalize to experiences in new data. This is a really important point to remember. As before, we can test this model for its out\-of\-sample generalizability with cross\-validation.
Another look at Figure 6\.7 suggests that the usefulness of the algorithm can be judged by the proportion of the plotting area that is *under* the ROC curve. The ‘Area Under the Curve’ metric or AUC for `churnreg2` is 0\.8408635\. AUC is another quick goodness of fit measure to guide feature selection across different models. 50% of the plotting area is under the Coin Flip line and 100% of the plotting area is underneath the Perfect Fit line. Thus, a reasonable AUC is between 0\.5 and 1\.
```
pROC::auc(testProbs$Outcome, testProbs$Probs)
```
```
## Area under the curve: 0.8409
```
### 6\.4\.1 Roc Curves
```
ggplot(testProbs, aes(d = as.numeric(testProbs$Outcome), m = Probs)) +
geom_roc(n.cuts = 50, labels = FALSE, colour = "#FE9900") +
style_roc(theme = theme_grey) +
geom_abline(slope = 1, intercept = 0, size = 1.5, color = 'grey') +
labs(title = "ROC Curve - churnModel")
```
Let’s quickly revisit the business problem: The goal is to identify those customers at risk for churn, so we can offer them a 20% off membership coupon which costs Bounce to Work! $8\.
Consider that at each predicted probability threshold, there is a different set of confusion metrics. A threshold of 10% for instance, means that most predictions will be classified as churn, and most customers will receive a coupon. This may mean Bounce to Work! would ultimately loose money on the promotion. As we’ll learn below, different confusion metrics have different costs and benefits, and searching for an optimal threshold, helps ensure we can stay in the black.
The Receiver Operating Characteristic Curve or ROC Curve is useful because it visualizes trade\-offs for two important confusion metrics, while also providing a single goodness of fit indicator. The code block above uses the `plotROC` and `ggplot` packages to create a ROC Curve for the `churnreg2`.
The y\-axis of the ROC curve (topmost, Figure 6\.7\) shows the rate of true positives (observed churn, predicted as churn) for each threshold from 0\.01 to 1\. The x\-axis shows the rate of false positives (observed churn, predicted as no churn) for each threshold.
The notion of trade\-offs is really important here. Follow the y\-axis to 0\.75 and then across to the orange curve. According to the ROC Curve, a threshold that predicts churn correctly 75% of the time, will predict `Churn` incorrectly \>20% of the time. The critical question is whether this trade\-off is appropriate given the cost/benefit of the business process?
What is really interesting about these trade\-offs is that they come with diminishing returns. for every additional improvement in true positive rate, the model will make a greater proportion of false positive errors. Moving from a 75% to a 90% true positive rate dramatically increases the false positive rate.
To understand how the ROC Curve doubles as a goodness of fit tool, it is helpful to start with the diagonal line in the bottom panel of Figure 6\.7\. Also known as the ‘Coin Flip line’, any true positive rate on this line has an equal corresponding false positive rate. Any classifier with an ROC Curve along the diagonal line is no better than a coin flip. An ROC Curve below the diagonal line represents a very poor fit. Consider along this line, a model that gets it right \~7% of the time, gets it wrong 50% of the time.
A ‘Perfect Fit’ may seem desirable, but it is actually indicative of an overfit model. A ROC Curve like this yields 100% true positives and 0 false positives. If the model is so strongly fit to experiences in the training set, it will not likely generalize to experiences in new data. This is a really important point to remember. As before, we can test this model for its out\-of\-sample generalizability with cross\-validation.
Another look at Figure 6\.7 suggests that the usefulness of the algorithm can be judged by the proportion of the plotting area that is *under* the ROC curve. The ‘Area Under the Curve’ metric or AUC for `churnreg2` is 0\.8408635\. AUC is another quick goodness of fit measure to guide feature selection across different models. 50% of the plotting area is under the Coin Flip line and 100% of the plotting area is underneath the Perfect Fit line. Thus, a reasonable AUC is between 0\.5 and 1\.
```
pROC::auc(testProbs$Outcome, testProbs$Probs)
```
```
## Area under the curve: 0.8409
```
6\.5 Cross\-validation
----------------------
As a data scientist working at Bounce to Work!, your goal should be to train a predictive model that will be useful for many months to come, rather than re\-training a model every month. Thus, the model is only as good as its ability to generalize to new data. This section performs cross\-validation using the `caret::train` function as we saw in Chapter 3\.
The `trainControl` parameter is set to run 100 k\-folds and to output predicted probabilities, `classProbs`, for ‘two classes’, `Churn` and `No_Churn`. Additional parameters output AUC (the `train` function refers to this as ‘ROC’) and confusion metrics for each fold.
Importantly, the three metrics in the `cvFit` output are for *mean* AUC, Sensitivity, and Specificity across *all 100 folds*. Note that the dependent variable here is `Churn` not `churnNumeric`.
```
ctrl <- trainControl(method = "cv", number = 100, classProbs=TRUE, summaryFunction=twoClassSummary)
cvFit <- train(Churn ~ ., data = churn %>%
dplyr::select(
-SeniorCitizen,
-WeekendBouncer,
-churnNumeric),
method="glm", family="binomial",
metric="ROC", trControl = ctrl)
cvFit
```
```
## Generalized Linear Model
##
## 7032 samples
## 8 predictor
## 2 classes: 'Churn', 'No_Churn'
##
## No pre-processing
## Resampling: Cross-Validated (100 fold)
## Summary of sample sizes: 6961, 6961, 6961, 6962, 6961, 6962, ...
## Resampling results:
##
## ROC Sens Spec
## 0.8407402 0.5333333 0.8949849
```
The means are not as important as the *across* fold goodness of fit. Figure 6\.8 below plots the distribution of AUC, Sensitivity, and Specificity across the 100 folds. `names(cvFit)` shows that `train` creates several outputs. `cvFit$resample` is a data frame with goodness of fit for each of the 100 folds. The code block below joins to this, the mean goodness of fit (`cvFit$results`), and plots the distributions as a small multiple plot.
The tighter each distribution is to its mean, the more generalizable the model. Our model generalizes well with respect to Specificity \- the rate it correctly predicts `No_Churn` ( *the true negatives* ). It does not generalize as well with respect to Sensitivity \- the rate it correctly predicts `Churn` ( *true positives* ). Note that if the model was overfit with an AUC of `1`, it would also not generalize well to new data.
It seems our would\-be decision\-making tool is inconsistent in how it predicts the business\-relevant outcome, churn. That inconsistency could be systematic \- perhaps it works better for younger bouncers or for more serious bouncers. Or the model could simply lack sufficient predictive power. Either way, this inconsistency will have a direct effect on the business process should this algorithm be put into production.
```
dplyr::select(cvFit$resample, -Resample) %>%
gather(metric, value) %>%
left_join(gather(cvFit$results[2:4], metric, mean)) %>%
ggplot(aes(value)) +
geom_histogram(bins=35, fill = "#FF006A") +
facet_wrap(~metric) +
geom_vline(aes(xintercept = mean), colour = "#981FAC", linetype = 3, size = 1.5) +
scale_x_continuous(limits = c(0, 1)) +
labs(x="Goodness of Fit", y="Count", title="CV Goodness of Fit Metrics",
subtitle = "Across-fold mean reprented as dotted lines") +
plotTheme()
```
6\.6 Generating costs and benefits
----------------------------------
Our goal again, is to target those at risk of churning with a 20% off coupon that hopefully convinces them to resubscribe. How can our predictive model help improve revenue? Assuming the model successfully predicts churners, and assuming a 20% coupon was enough incentive to stay, can the algorithmic approach help optimize Bounce to Work!’s marketing campaign?
Let’s make the following assumptions about the marketing campaign:
1. The membership costs $30\.
2. Each mailer to a potential churn\-er costs $8\.00\. It includes an offer for 20% off *this month’s* subscription ($6 off) plus $2\.00 for printing/postage.
3. Of those would\-be churners who are sent a mailer, past campaigns show \~50% of recipients re\-subscribe.
While there are many ways to approach a cost/benefit analysis, our approach will be to use the confusion matrix from `testProbs`. Below the cost/benefit for each outcome in our confusion matrix is calculated, like so[47](#fn47):
* **True negative revenue** “We predicted no churn, did not send a coupon, and the customer did not churn”: $30 \- $0 \= **$30**
* **True positive revenue** “We predicted churn and sent the mailer”: $30 \- $8 \= **$22** return for 50% of cases that re\-subscribe. We lose $30 \+ $2 \= **$\-32** for 50% of cases who were sent the coupon but did not re\-subscribe.
* **False negative revenue** “We predicted no churn, sent no coupon, and the customer churned”: $0 \- 30 \= **\-$30**
* **False positive revenue** “We predicted churn and sent the mailer, the customer was not going churn but used the coupon anyway”: $30 \- $8 \= **$22**
For now, note that the greatest cost comes with 50% of the true positives where we offer a coupon but lose the customer anyway. The greatest marginal benefit is in maximizing the number of true negatives \- customers who we correctly predict will not churn. To calculate the total cost/benefit, these confusion metrics are multiplied by their corresponding costs below.
```
cost_benefit_table <-
testProbs %>%
count(predOutcome, Outcome) %>%
summarize(True_Negative = sum(n[predOutcome==0 & Outcome==0]),
True_Positive = sum(n[predOutcome==1 & Outcome==1]),
False_Negative = sum(n[predOutcome==0 & Outcome==1]),
False_Positive = sum(n[predOutcome==1 & Outcome==0])) %>%
gather(Variable, Count) %>%
mutate(Revenue =
case_when(Variable == "True_Negative" ~ Count * 30,
Variable == "True_Positive" ~ ((30 - 8) * (Count * .50)) +
(-32 * (Count * .50)),
Variable == "False_Negative" ~ (-30) * Count,
Variable == "False_Positive" ~ (30 - 8) * Count)) %>%
bind_cols(data.frame(Description = c(
"We predicted no churn and did not send a mailer",
"We predicted churn and sent the mailer",
"We predicted no churn and the customer churned",
"We predicted churn and the customer did not churn")))
```
| Variable | Count | Revenue | Description |
| --- | --- | --- | --- |
| True\_Negative | 2306 | 69180 | We predicted no churn and did not send a mailer |
| True\_Positive | 506 | \-2530 | We predicted churn and sent the mailer |
| False\_Negative | 428 | \-12840 | We predicted no churn and the customer churned |
| False\_Positive | 275 | 6050 | We predicted churn and the customer did not churn |
| Table 6\.3 |
| --- |
Assuming our algorithm was used, the total `Revenue` (column sum) in the Cost/Benefit Table 6\.3 above is $59,860\. Assuming no algorithm was used and 934 customers churned (from `testProbs`), the cost/benefit would be (2,581 instances of no churn \* $30 \= $77,430\) \+ (934 instances of *observed* churn \* \-30 \= \-$28,020\), which leads to a net revenue of $49,410\.
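These business\-as\-usual figures are simple arithmetic and can be checked against the Cost/Benefit Table directly. A quick sketch, using the counts reported above:

```
# revenue with no algorithm: non-churners pay $30; each observed churner is a $30 loss
no_model_revenue <- (2581 * 30) + (934 * -30)
no_model_revenue                                      # $49,410

# revenue with the algorithm is the column sum of the Cost/Benefit Table
sum(cost_benefit_table$Revenue)                       # $59,860
sum(cost_benefit_table$Revenue) - no_model_revenue    # savings from using the model
```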
Thus, the algorithm would save Bounce to Work! $10,450\. This savings is based on a 50% threshold, but maybe other thresholds can yield an even greater cost/benefit. Let’s now try to optimize the threshold.
### 6\.6\.1 Optimizing the cost/benefit relationship
Recall that a different confusion matrix, set of errors, and cost/benefit calculation exists for each threshold. Ideally, the ‘optimal’ threshold is the one that returns the greatest cost/benefit. In this section, a function is created to iteratively loop through each threshold, calculate confusion metrics, and total the revenue for each. The results are then visualized for each threshold.
The `iterateThresholds` function in the `functions.R` script is based on a `while` loop. The threshold *x* starts at 0\.01 and, while it is less than 1, a predicted classification, `predOutcome`, is `mutate`d given *x*; confusion metrics are calculated; the cost/benefit is calculated and appended to a data frame called `all_prediction`. Finally, *x* is incremented by 0\.01 and the process continues until *x* \= 1\.
The `iterateThresholds` function is run below and returns a host of goodness of fit indicators, several of which are shown below. Recall that each row is a different `Threshold`.
```
whichThreshold <-
iterateThresholds(
data=testProbs, observedClass = Outcome, predictedProbs = Probs)
whichThreshold[1:5,]
```
```
## # A tibble: 5 x 10
## Count_TN Count_TP Count_FN Count_FP Rate_TP Rate_FP Rate_FN Rate_TN Accuracy
## <int> <int> <int> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 350 929 5 2231 0.995 0.864 0.00535 0.136 0.364
## 2 576 926 8 2005 0.991 0.777 0.00857 0.223 0.427
## 3 697 923 11 1884 0.988 0.730 0.0118 0.270 0.461
## 4 818 918 16 1763 0.983 0.683 0.0171 0.317 0.494
## 5 904 913 21 1677 0.978 0.650 0.0225 0.350 0.517
## # ... with 1 more variable: Threshold <dbl>
```
Next, the result is moved to long form and `Revenue` is calculated for each confusion metric at each threshold.
```
whichThreshold <-
whichThreshold %>%
dplyr::select(starts_with("Count"), Threshold) %>%
gather(Variable, Count, -Threshold) %>%
mutate(Revenue =
case_when(Variable == "Count_TN" ~ Count * 30,
Variable == "Count_TP" ~ ((30 - 8) * (Count * .50)) +
(-32 * (Count * .50)),
Variable == "Count_FN" ~ (-30) * Count,
Variable == "Count_FP" ~ (30 - 8) * Count))
```
Figure 6\.9 below plots the `Revenue` for each confusion metric by threshold. This is a plot of trade\-offs. Each is described below:
* **False negative revenue**: “We predicted no churn and the customer churned” \- As the threshold increases, more churning customers are never sent a mailer. As fewer mailers go out, these losses mount.
* **False positive revenue**: “We predicted churn and sent the mailer, the customer did not churn but used the coupon anyway” \- A high threshold means few customers are predicted to churn, which limits the revenue hit from those who used the coupon anyway, despite their intention to renew.
* **True negative revenue**: “We predicted no churn, did not send a mailer, and the customer did not churn” \- As the threshold goes up, the number of full price paying customers goes up. At the highest thresholds, the model predicts no churn for nearly everyone, and the revenue lost to actual churners shows up in the false negative category instead.
* **True positive revenue**: “We predicted churn and sent the mailer” \- Although the coupon convinces 50% of members to re\-subscribe, saving some revenue, the cost of the remaining churn leads to losses at most thresholds.
```
whichThreshold %>%
ggplot(.,aes(Threshold, Revenue, colour = Variable)) +
geom_point() +
scale_colour_manual(values = palette5[c(5, 1:3)]) +
labs(title = "Revenue by confusion matrix type and threshold",
y = "Revenue") +
plotTheme() +
guides(colour=guide_legend(title = "Confusion Matrix"))
```
The next step of the cost/benefit analysis is to calculate the total `Revenue` across confusion metrics for each threshold.
Below, `actualChurn` and `Actual_Churn_Rate` include 50% of the True Positives (who were not swayed by the coupon) and all of the False Negatives (those who never even received a coupon). `Actual_Churn_Revenue_Loss` is loss from this pay period, and `Revenue_Next_Period` is for the next, assuming no new customers are added.
Assuming `testProbs` is representative of Bounce to Work!’s customers, 934 (27%) of customers will churn resulting in a net loss of \-$28,020 and a net revenue of $49,410\. How do the other thresholds compare?
```
whichThreshold_revenue <-
whichThreshold %>%
mutate(actualChurn = ifelse(Variable == "Count_TP", (Count * .5),
ifelse(Variable == "Count_FN", Count, 0))) %>%
group_by(Threshold) %>%
summarize(Revenue = sum(Revenue),
Actual_Churn_Rate = sum(actualChurn) / sum(Count),
Actual_Churn_Revenue_Loss = sum(actualChurn * 30),
Revenue_Next_Period = Revenue - Actual_Churn_Revenue_Loss)
whichThreshold_revenue[1:5,]
```
```
## # A tibble: 5 x 5
## Threshold Revenue Actual_Churn_Rate Actual_Churn_Revenue_L~ Revenue_Next_Peri~
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.01 54787 0.134 14085 40702
## 2 0.02 56520 0.134 14130 42390
## 3 0.03 57413 0.134 14175 43238
## 4 0.04 58256 0.135 14250 44006
## 5 0.05 58819 0.136 14325 44494
```
A threshold of 26% is optimal and yields the greatest revenue at $62,499\. After that mark, losses associated with False Negatives begin to mount (see Figure 6\.9 above).
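One way to confirm the revenue\-maximizing threshold is to sort `whichThreshold_revenue` by `Revenue`; this mirrors the `pull(arrange(...))` call used to place the vertical line in the plot below.

```
# the top row is the revenue-maximizing threshold
whichThreshold_revenue %>%
  arrange(-Revenue) %>%
  dplyr::select(Threshold, Revenue) %>%
  head(1)
```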
`Revenue` (this period) and `Revenue_Next_Period` are plotted below for each `Threshold`. Here we assume no new customers are added next pay period.
```
whichThreshold_revenue %>%
dplyr::select(Threshold, Revenue, Revenue_Next_Period) %>%
gather(Variable, Value, -Threshold) %>%
ggplot(aes(Threshold, Value, colour = Variable)) +
geom_point() +
geom_vline(xintercept = pull(arrange(whichThreshold_revenue, -Revenue)[1,1])) +
scale_colour_manual(values = palette2) +
plotTheme() + ylim(0,70000) +
labs(title = "Revenue this pay period and the next by threshold",
subtitle = "Assuming no new customers added next period. Vertical line denotes optimal threshold")
```
These cost/benefit functions are incredibly revealing. It’s clear that `Revenue` (this period) declines after the 26% threshold. The trend for `Revenue_Next_Period` is similar, but interestingly, the slope of the line is much steeper. Why do you think this is?
Table 6\.4 shows the `Revenue` and the rate of churn for no model, the default 50% threshold, and the optimal threshold. Not only does the optimal threshold maximize `Revenue` this pay period, but it also results in fewer customers actually churning in the next pay period, helping to maximize revenues in the long term as well.
| Model | Revenue\_This\_Period | Lost\_Customers\_Next\_Period |
| --- | --- | --- |
| No predictive model | $49,410 | 27% |
| 50% Threshold | $59,860 | 19% |
| 26% Threshold | $62,499 | 16% |
| Table 6\.4 |
| --- |
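The model\-based rows of Table 6\.4 can be pulled directly from `whichThreshold_revenue`. A quick sketch, using `near` to avoid floating\-point comparison issues:

```
# Revenue and churn rate at the default and optimal thresholds
whichThreshold_revenue %>%
  filter(near(Threshold, 0.50) | near(Threshold, 0.26)) %>%
  dplyr::select(Threshold, Revenue, Actual_Churn_Rate)
```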
6\.7 Conclusion \- churn
------------------------
The goal of this chapter is to help pogo\-transit company Bounce to Work! maximize revenues by reducing the number of customers who might not otherwise renew their membership (i.e. churn). A Logistic regression model was estimated to predict `Churn` as a function of customer data and the relevant goodness of fit indicators were introduced. To demonstrate how machine learning can directly influence a business problem, these ‘confusion metrics’ were used to calculate the `Revenue` implications of the model relative to the business\-as\-usual approach, sans model. More experienced readers can swap out `glm` for more advanced machine learning algorithms, and notice an immediate improvement in predictive performance.
Unlike every other chapter in this book, this one focused on resource allocation for a for\-profit company. It may not seem like ‘cost/benefit’ and ‘customer’ are terms that apply to government \- but they absolutely do. Every taxpayer, household, child, parent, and senior is a client of government and all deserve a government that works to their benefit.
The difference is in finding the ‘optimal’ resource allocation approach. This may be possible if the only bottom line is revenue, but it is largely impossible in government, which has many unquantifiable bottom lines like equity and social cohesion. Armed with the necessary domain expertise, however, government data scientists can still use these methods to improve service delivery.
There is no shortage of use cases in government that could benefit from these models. Which homelessness or drug treatment intervention might increase the probability of success? Which landlords are likely renting and evicting tenants illegally? Which buildings are at risk of falling down? Which gang member is at risk for gun crime? Which Medicaid recipient might benefit from in\-home nursing care? And the list goes on.
There are two ways to judge the usefulness of these models. First, does the data\-driven approach improve outcomes relative to the business\-as\-usual approach? Chapter 4 compared the geospatial risk model to traditional hot spot mapping. Here, the business\-as\-usual approach may have just allowed churn; perhaps an employee had a method for choosing which customers receive a coupon; or maybe they pulled names from a hat. Consider that a confusion matrix is possible for each of these approaches.
The second way to judge utility is through the lens of ‘fairness’, which is the subject of the next chapter. Just because the model may improve cost/benefit, doesn’t necessarily make for a more useful decision\-making approach if it further disenfranchises one group or another. There are many ways to judge fairness, and our approach, not surprisingly, will be to analyze across\-group generalizability.
Ultimately, an algorithm’s usefulness should not be evaluated by an engineer alone, but by a team of engineers and domain experts who can evaluate models within the appropriate context.
6\.8 Assignment \- Target a subsidy
-----------------------------------
Emil City is considering a more proactive approach for targeting homeowners who qualify for a home repair tax credit program. This tax credit program has been around for close to twenty years, and while the Department of Housing and Community Development (HCD) tries to proactively reach out to eligible homeowners every year, the uptake of the credit is woefully inadequate. Typically, only 11% of the eligible homeowners they reach out to take the credit.
The consensus at HCD is that the low conversion rate is due to the fact that the agency reaches out to eligible homeowners at random. Unfortunately, we don’t know the cost/benefit of previous campaigns, but we should assume it wasn’t good. To move toward a more targeted campaign, HCD has recently hired you, their very first data scientist, to convert all the client\-level data collected from previous campaigns into a decision\-making analytic that can better target their limited outreach resources.
You have been given a random sample of records. Your goal is to train the best classifier you can and use the results to inform a cost/benefit analysis.
The data for this exercise has been adapted from Moro \& Rita (2014\)[48](#fn48). Some variables have been changed to suit the current use case. The dependent variable is `y`, which is `Yes` to indicate that a homeowner took the credit and `No` to indicate they did not. There are many features related to this outcome, described in the table below.
| Variable | Description | Class | Notes |
| --- | --- | --- | --- |
| age | Age of homeowner | Numeric | |
| job | Occupation indicator | Category | |
| marital | Marital Status | Category | |
| education | Educational attainment | Category | |
| taxLien | Is there a lien against the owner’s property? | Category | |
| mortgage | Is the owner carrying a mortgage | Category | |
| taxbill\_in\_phl | Is the owner’s full time residence not in Philadelphia | Category | |
| contact | How have we previously contacted individual? | Category | |
| month | Month we last contacted individual | Category | |
| day\_of\_week | Day of the week we last contacted individual | Category | |
| campaign | \# of contacts for this ind for this campaign | Category | |
| pdays | \# days after ind. last contacted from a previous program | Category | 999 \= client not previously contacted |
| previous | \# of contacts before this campaign for this ind. | Numeric | |
| poutcome | Outcome of the previous marketing campaign | Categorical | |
| unemploy\_rate | Unemployment rate at time of campaign | Numeric | |
| cons.price.idx | Consumer Price Index at campaign time | Numeric | |
| cons.conf.idx | Consumer confidence index at time of campaign | Numeric | |
| inflation\_rate | US Inflation Rate | Numeric | daily indicator |
| spent\_on\_repairs | Amount annually spent on home repairs | Numeric | |
| y | Indicates the individual took the credit | Category | Yes/No (but you may wish to recode to numeric) |
| Table 6\.5 |
| --- |
After studying the credit program and related materials, you construct some stylized facts to help guide your cost/benefit analysis. If we predict that a household will take the credit, then HCD is willing to allocate **$2,850** per homeowner which includes staff and resources to facilitate mailers, phone calls, and information/counseling sessions at the HCD offices. Given the new targeting algorithm, we should now assume 25% of contacted eligible homeowners take the credit. The remainder receive the marketing allocation but do not take the credit.
The credit costs **$5,000** per homeowner, which can be used toward home improvement. Academic researchers in Philadelphia evaluated the program, finding that houses that transacted after taking the credit sold with a **$10,000** premium, on average. Homes surrounding the repaired home see an aggregate premium of **$56,000**, on average. Below is a rundown of the costs and benefits for each potential outcome of the model you will build. This is a public\-sector use case, so the cost/benefit is not as straightforward as for Bounce to Work! If you feel that changing a constraint would be helpful, then please do so.
1. True Positive \- Predicted correctly homeowner would take the credit; allocated the marketing resources, and 25% took the credit.
2. True Negative \- Predicted correctly homeowner would not take the credit, no marketing resources were allocated, and no credit was allocated.
3. False Positive \- Predicted incorrectly homeowner would take the credit; allocated marketing resources; no credit allocated.
4. False Negative \- We predicted that a homeowner would not take the credit but they did. These are likely homeowners who signed up for reasons unrelated to the marketing campaign. Thus, we ‘0 out’ this category, assuming the cost/benefit of this is $0\.
Deliverables:
1. One paragraph on the motivation for the analysis.
2. Develop and interpret data visualizations that describe feature importance/correlation.
3. Split your data into a 65/35 training/test set.
4. The Sensitivity (True Positive Rate) for a model with all the features is very low. **Engineer new features** that significantly increase the Sensitivity.
1. Interpret your new features in one paragraph.
2. Show a regression summary for both the kitchen sink and your engineered regression.
3. Cross validate both models; compare and interpret two facetted plots of ROC, Sensitivity and Specificity.
5. Output an ROC curve for your new model and interpret it.
6. Develop a cost benefit analysis.
1. Write out the cost/benefit equation for each confusion metric.
2. Create the ‘Cost/Benefit Table’ as seen above.
3. Plot the confusion metric outcomes for each Threshold.
4. Create two small multiple plots that show `Threshold` as a function of `Total_Revenue` and `Total_Count_of_Credits`. Interpret this.
5. Create a table of the `Total_Revenue` and `Total_Count_of_Credits` allocated for 2 categories: `50%_Threshold` and your `Optimal_Threshold`.
7. Conclude whether and why this model should or shouldn’t be put into production. What could make the model better? What would you do to ensure that the marketing materials resulted in a better response rate?
Chapter 7 People\-Based ML Models: Algorithmic Fairness
=======================================================
7\.1 Introduction
-----------------
The churn use case from Chapter 6 is one example of how machine learning algorithms increasingly make decisions in place of humans. Marketing campaigns, insurance, credit cards, bank loans, news and shopping recommendations are all now allocated with these methods. Bestsellers like Cathy O’Neil’s “Weapons of Math Destruction” and relentless news coverage of tech company data mining suggest these algorithms can bring as much peril as they do promise.[49](#fn49)
Government is still unsure how to regulate private\-sector algorithms \- their inner workings cast as intellectual property and closed off from public scrutiny. In the public sector, however, there is an expectation that algorithms are open and transparent. While governments *today* are using algorithms to automate their decision\-making, many lack the regulatory or planning wherewithal to ensure these models are fair.
In this chapter, we learn how to open the black box of these algorithms to better judge them for fairness and better understand the pertinent social costs. A person\-based model is estimated to predict ‘recidivism’ \- the term given to an offender released from prison, who then re\-offends and must go back to prison. These algorithms have exploded in recent years \- another example of the criminal justice system as an early adopter of machine learning.
A recidivism predictive model may be used by a judge to inform sentencing or parole decisions. It may also be used to prioritize who gets access to prisoner reentry programs like job training, for example.
If churn was the only context we had for the efficacy of these tools, we might think applying machine learning to recidivism is a no\-brainer. This could not be further from the truth. Here, as was the case in Predictive Policing (Chapter 5\), we will see that when human bias is baked into machine learning predictions, the result is a decision\-making tool that is not useful.
As before, a model that is not useful is one that lacks generalizability across groups. With churn, generalizability was not considered, and the cost of getting it wrong is simply a loss of revenue. In geospatial Predictive Policing, the cost is racially discriminatory police surveillance. With recidivism, the cost is the systematic and disproportional over\-imprisonment of one race relative to another.
In other words, a biased business algorithm can cost money, but a biased government algorithm can cost lives and livelihoods. The associated social and economic costs could be massive, and we must learn to evaluate these models with these costs in mind.
One example of bias in the recidivism use case is higher False Positive rates for African American ex\-offenders compared to Caucasians. A False Positive in this context means that the algorithm predicted an ex\-offender would recidivate but they did not. If a judge uses a predicted recidivism ‘risk score’ to aid in his or her sentencing decision, and such a bias exists, then a disproportional number of African Americans may be incarcerated for longer than they otherwise deserve. Taken across tens of thousands of citizens in one metropolitan area, the social costs are unfathomable.
### 7\.1\.1 The spectre of disparate impact
Social Scientists are familiar with issues of fairness and discrimination, but identifying algorithmic discrimination is just as nuanced as it would be in say, housing and labor markets. It is unlikely that a jurisdiction would create a discriminatory algorithm on purpose. Given the black box nature of these models, it is more likely they would create a decision\-making tool that has a “disparate impact” on members of a protected class. Disparate impact is a legal theory positing that although a policy or program may not be discriminatory *prima facie*, it still may have an adverse discriminatory effect, even if unintended.
Recall our need to ensure that *geospatial* predictive algorithms generalize from one urban context to the next. This same rationale applies with people\-based models, but the concern is generalizability across different protected classes, like gender and race.
If an algorithm does not generalize to one group, its use for resource allocation may have a disparate impact on that group. The False Positive example for African Americans relative to Caucasians is one such example. This may occur because the algorithm lacks appropriate features to accurately model the African American “experience”. It may also be that the training data itself is biased, a critique discussed at length in Chapter 5\.
As a reminder, systematic over\-policing of historically disenfranchised communities creates a feedback loop where more reported crime leads to more cops on patrol, who then report more crimes, that ultimately lead to more convictions. In a predictive setting, if this *selection bias* goes unobserved, the systematic error will move into the error term and lead to bias and unintended social costs.
It is impossible to identify the effect of unobserved variables. As an alternative, researchers have very recently developed a series of fairness metrics that can be used to judge disparate impact.[50](#fn50) A 2018 review by Verma \& Rubin is particularly relevant for policy\-makers interested in learning more about fairness metrics.[51](#fn51)
For example, in 2016, journalists from ProPublica released an evaluation of the COMPAS recidivism prediction algorithm built by a company called Northpointe, finding that while the algorithm had comparable *accuracy* rates across different racial groups, there were clear racial differences for *errors* that had high social costs.[52](#fn52) This paradox led ProPublica to ask a fascinating question \- “how could an algorithm simultaneously be fair and unfair?”[53](#fn53) In this chapter, we will make use of the COMPAS data ProPublica used for their analysis and develop fairness metrics that can help identify disparate impact.
### 7\.1\.2 Modeling judicial outcomes
In the criminal justice system, as in life, decisions are made by weighing risks. Among a host of Federal sentencing guidelines, judges are to “protect the public from further crimes of the defendant.”[54](#fn54) Rhetorically, this sounds straightforward \- identify the risk that an individual will cause the public harm and impose a sentence to reduce this risk. However, bias always plays a role in decision\-making. We would never ask the average citizen to weigh risks and punish accordingly because we do not believe the average citizen could act with impartiality. Although this is the standard we impose on judges, even they make systematic mistakes.[55](#fn55)
And the use of these data\-driven risk models in the criminal justice system is only increasing in recent years.[56](#fn56) Algorithms are predicting risk for a host of use cases including bail hearings[57](#fn57), parole[58](#fn58), and to support sentencing decisions by assessing future criminal behavior.[59](#fn59)
Can an algorithm help judges make better decisions? Recent research determined that even with much less data on hand, people without a criminal justice background make recidivism predictions as accurately as the COMPAS algorithm.[60](#fn60) Very importantly, studies have also shown that introducing prediction into the decision\-making process can reduce the odds of re\-arrests.[61](#fn61)
Collectively, this research suggests that there may be benefits for governments in adopting these tools \- but do these benefits outweigh the social costs? No doubt, more research is needed on the social justice implications of these algorithms. However, the more timely need is for government to proactively explore biases in the models they are currently developing.
As was the case with churn, the confusion metrics are instrumental in communicating biases to non\-technical decision\-makers because they directly reflect the business process at hand.
### 7\.1\.3 Accuracy and generalizability in recidivism algorithms
Accuracy and generalizability continue to be the two yardsticks we use to measure the utility of our algorithms. The goal of a recidivism classifier is to predict two binary outcomes \- `Recidivate` and `notRecidivate`. While the “percent of correct predictions” is a simple measure of accuracy, it lacks the nuance needed to detect disparate impact. As they were in Chapter 6, confusion metrics will continue to be key.
The basic premise of the recidivism model is to learn the recidivism experience of ex\-offenders in the recent past and test the extent to which this experience generalizes to a population for which the propensity to recidivate is unknown. The prediction from the model is a “risk score” running from 0 to 1, interpreted as “the probability person *i* will recidivate.” The model can then be validated by comparing predicted classifications to observed classifications, giving a host of more nuanced errors including:
**True Positive (“Sensitivity”)** \- “The person was predicted to recidivate and actually recidivated.”
**True Negative (“Specificity”)** \- “The person was predicted not to recidivate and actually did not recidivate.”
**False Positive** \- “The person was predicted to recidivate and actually did not recidivate.”
**False Negative** \- “The person was predicted not to recidivate and actually did recidivate.”
7\.2 Data and exploratory analysis
----------------------------------
Begin by loading the necessary R packages, reading in the `plotTheme` with the source file, and some color palettes.
```
library(lubridate)
library(tidyverse)
library(caret)
library(kableExtra)
library(ModelMetrics)
library(plotROC)
library(knitr)
library(grid)
library(gridExtra)
library(QuantPsyc)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette_9_colors <- c("#FF2AD4","#E53AD8","#CC4ADC","#996AE5","#7F7BE9",
"#668BED","#33ABF6","#19BBFA","#00CCFF")
palette_3_colors <- c("#FF2AD4","#7F7BE9","#00CCFF")
palette_2_colors <- c("#FF2AD4", "#00CCFF")
palette_1_colors <- c("#00CCFF")
```
The data for this chapter comes directly from ProPublica’s Github repository[62](#fn62), and was the impetus for a series of articles on bias in criminal justice algorithms.[63](#fn63)
At the time of writing, no data dictionary had been posted; thus, many of the feature engineering routines employed below were copied directly from ProPublica’s IPython Notebook.[64](#fn64) While this is not ideal, it is, at times, the nature of working with open data. The below table shows each variable used in the analysis.
| Variable | Description |
| --- | --- |
| sex | Categorical variables that indicates whether the ex\-offender is male or female |
| age | The age of the person |
| age\_cat | Variable that categories ex\-offenders into three groups by age: Less than 25, 25 to 45, Greater than 45 |
| race | The race of the person |
| priors\_count | The number of prior crimes committed |
| two\_year\_recid | Numerical binary variable of whether the person recidivated or not, where 0 is not recidivate and 1 is the person recidivated |
| r\_charge\_desc | Description of the charge upon recidivating |
| c\_charge\_desc | Description of the original criminal charge |
| c\_charge\_degree | Degree of the original charge |
| r\_charge\_degree | Degree of the charge upon recidivating |
| juv\_other\_count | Categorical variable of the number of prior juvenile convictions that are not considered either felonies or misdemeanors |
| length\_of\_stay | How long the person stayed in jail |
| Recidivated | Character binary variable of whether the person recidivated (Recidivate) or not (notRecidivate) |
| |
| --- |
| Table 7\.1 |
The cleaned dataset describes 6,162 ex\-offenders screened by COMPAS in 2013 and 2014\. There are 53 columns in the original data describing length of jail stays, type of charges, the degree of crimes committed, and criminal history. Many variables were added by Northpointe, the original author of the COMPAS algorithm, and are not relevant to the model building process. Also, noticeably absent, are economic and educational outcomes for these individuals. The model developed below is simplistic \- it is not a replication of the existing Northpointe algorithm.
```
raw_data <- read.csv(file.path(root.dir,"Chapter7/compas-scores-two-years.csv"))
df <-
raw_data %>%
filter(days_b_screening_arrest <= 30) %>%
filter(days_b_screening_arrest >= -30) %>%
filter(is_recid != -1) %>%
filter(c_charge_degree != "O") %>%
filter(priors_count != "36") %>%
filter(priors_count != "25") %>%
mutate(length_of_stay = as.numeric(as.Date(c_jail_out) - as.Date(c_jail_in)),
priors_count = as.factor(priors_count),
Recidivated = as.factor(ifelse(two_year_recid == 1,"Recidivate","notRecidivate")),
recidivatedNumeric = ifelse(Recidivated == "Recidivate", 1, 0),
race2 = case_when(race == "Caucasian" ~ "Caucasian",
race == "African-American" ~ "African-American",
TRUE ~ "Other")) %>%
dplyr::select(sex,age,age_cat,race,race2,priors_count,two_year_recid,r_charge_desc,
c_charge_desc,c_charge_degree,r_charge_degree,juv_other_count,
length_of_stay,priors_count,Recidivated,recidivatedNumeric) %>%
filter(priors_count != 38)
```
Figure 7\.1 illustrates the most frequent initial charge. Crimes of varying severity are included in the dataset. Note the use of `reorder` and `FUN = max` in the `ggplot` call.
```
group_by(df, c_charge_desc) %>%
summarize(count = n()) %>%
mutate(rate = count / sum(count)) %>%
arrange(-rate) %>% head(9) %>%
ggplot(aes(reorder(c_charge_desc, rate, FUN = max),
rate, fill = c_charge_desc)) +
geom_col() + coord_flip() +
scale_fill_manual(values = palette_9_colors) +
labs(x = "Charge", y = "Rate", title= "Most frequent initial charges") +
plotTheme() + theme(legend.position = "none")
```
Figure 7\.2 visualizes the rate of recidivism by race. Note the rate of recidivism for African Americans is twice that (59%) of Caucasians (29%). If this reported rate is driven by reporting or other bias, then it may have important implications for the model’s usefulness.
```
df %>%
group_by(Recidivated, race) %>%
summarize(n = n()) %>%
mutate(freq = n / sum(n)) %>% filter(Recidivated == "Recidivate") %>%
ggplot(aes(reorder(race, -freq), freq)) +
geom_bar(stat = "identity", position = "dodge", fill = palette_2_colors[2]) +
labs(title = "Recidivism rate by race",
y = "Rate", x = "Race") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
7\.3 Estimate two recidivism models
-----------------------------------
“You mustn’t include race in the model because that will ensure resource allocation decisions will, in part, be guided by race.” I’ve heard this line countless times in my career, but as we will learn, the bottom line is that if racial bias is baked into the training data, then controlling explicitly for race is not likely to remove it. This section tests this theory by estimating a Logistic regression with race and one without.
The dependent variable is `Recidivated` which is coded as `1` for inmates who experienced a recidivism event and `0` for those that did not. Aside from race, the below two models include sex, age, the number of “other” convictions as a juvenile, the count of prior adult convictions, and the length of stay in prison.
The data is split into a 75% training set and a 25% test set using a simple `dplyr` approach.
```
train <- df %>% dplyr::sample_frac(.75)
train_index <- as.numeric(rownames(train))
test <- df[-train_index, ]
```
`reg.noRace` and `reg.withRace` are estimated below. `summary(reg.noRace)` shows that all features are statistically significant and their signs reasonable. For example, as `age` increases, the probability of recidivism decreases. Conversely, the longer `length_of_stay` in the prison system, the greater the likelihood that an individual recidivates.
Note that `priors_count` is input as a factor. If it were input as a continuous feature, the interpretation would be ‘a one unit increase in priors leads to a corresponding increase in the propensity to recidivate.’ By converting to factor, the interpretation is that there is a statistically significant difference between 0 and *n* priors. Most of these fixed effects are significant.
```
reg.noRace <- glm(Recidivated ~ ., data =
train %>% dplyr::select(sex, age, age_cat,
juv_other_count, length_of_stay,
priors_count, Recidivated),
family = "binomial"(link = "logit"))
reg.withRace <- glm(Recidivated ~ ., data =
train %>% dplyr::select(sex, age, age_cat, race,
juv_other_count, length_of_stay,
priors_count, Recidivated),
family = "binomial"(link = "logit"))
```
The summary of `reg.withRace` is quite revealing. You may try two specifications with both the current 6\-category `race` feature or an alternative `race2` feature including categories for just `Caucasian`, `African-American`, and `Other`. In both instances the race variables are largely insignificant suggesting that differences in race are not driving propensity to recidivate.
How can that be given the differences illustrated in Figure 7\.2 above? To explore further, try to estimate another regression, the same as `reg.withRace`, but with `race2` (instead of `race`) and without `priors_count`.
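A minimal sketch of that suggested specification, mirroring the `glm` calls above (the object name `reg.withRace2` is just for illustration and is not used later in the chapter):

```
# same as reg.withRace, but with race2 instead of race and priors_count dropped
reg.withRace2 <- glm(Recidivated ~ ., data =
                       train %>% dplyr::select(sex, age, age_cat, race2,
                                               juv_other_count, length_of_stay,
                                               Recidivated),
                     family = "binomial"(link = "logit"))
summary(reg.withRace2)
```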
Why is race significant when `priors_count` is removed? Figure 7\.3 below shows the mean `priors_count` by race. African Americans are reported to have far higher prior rates than other races. Thus, `race` and `priors_count` tell the same story, and this collinearity renders race insignificant when both are included in the model.
As race plays no role in the usefulness of our model, `reg.noRace` is used for the remainder of the analysis.
```
group_by(df, race2) %>%
summarize(averagePriors = mean(as.numeric(priors_count))) %>%
ggplot(aes(race2, averagePriors, fill = race2)) +
geom_bar(stat="identity", position = "dodge") +
labs(title="Mean priors by race", y = "Mean Priors", x = "Race") +
scale_fill_manual(values = palette_3_colors, name = "Recidivism") +
plotTheme() + theme(legend.position = "none")
```
### 7\.3\.1 Accuracy \& Generalizability
Both accuracy and the confusion metrics are discussed here with emphasis on generalizability across race. To begin, the code below calculates a predicted recidivism class, `predClass`, for any predicted probability over 0\.50\.
```
testProbs <-
data.frame(class = test$recidivatedNumeric,
probs = predict(reg.noRace, test, type = "response"),
Race = test$race2)
```
The first cause for concern comes in Figure 7\.4 below, which contrasts observed and predicted recidivism rates given the 50% threshold. About 45% of ex\-offenders are observed to recidivate across all races, but only 40% are predicted to do so. This underprediction is far more pronounced for Caucasians and other races, relative to African Americans.
```
mutate(testProbs, predClass = ifelse(probs >= .5, 1, 0)) %>%
group_by(Race) %>%
summarize(Observed.recidivism = sum(class) / n(),
Predicted.recidivism = sum(predClass) / n()) %>%
gather(Variable, Value, -Race) %>%
ggplot(aes(Race, Value)) +
geom_bar(aes(fill = Race), position="dodge", stat="identity") +
scale_fill_manual(values = palette_3_colors) +
facet_wrap(~Variable) +
labs(title = "Observed and predicted recidivism", x = "Race", y = "Rate") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
Let’s delve a bit deeper by visualizing confusion metrics by race. Northpointe, the company that markets decision\-making tools based on these data, has argued that the algorithm is fair because of the comparable across\-race accuracy rates.[65](#fn65) Table 7\.2 below, confirms this claim \- but that is far from the entire story.
Despite equal accuracy rates, the issue is in the disparities for each confusion metric. The `iterateThresholds` function, first used in Chapter 6, will be used again to calculate confusion metrics for each threshold by race.
The function takes several inputs including the `data` frame of predicted probabilities; an `observedClass`; a column of predicted probabilities, `predictedProbs`, and an optional `group` parameter that provides confusion metrics by race.
Below, the function is run and the results are filtered for just the 50% threshold. Accuracy and the confusion metrics as rates are selected out, converted to long form and then plotted as a grouped bar plot. Let’s interpret each metric in the context of social cost.
```
iterateThresholds <- function(data, observedClass, predictedProbs, group) {
observedClass <- enquo(observedClass)
predictedProbs <- enquo(predictedProbs)
group <- enquo(group)
x = .01
all_prediction <- data.frame()
if (missing(group)) {
while (x <= 1) {
this_prediction <- data.frame()
this_prediction <-
data %>%
mutate(predclass = ifelse(!!predictedProbs > x, 1,0)) %>%
count(predclass, !!observedClass) %>%
summarize(Count_TN = sum(n[predclass==0 & !!observedClass==0]),
Count_TP = sum(n[predclass==1 & !!observedClass==1]),
Count_FN = sum(n[predclass==0 & !!observedClass==1]),
Count_FP = sum(n[predclass==1 & !!observedClass==0]),
Rate_TP = Count_TP / (Count_TP + Count_FN),
Rate_FP = Count_FP / (Count_FP + Count_TN),
Rate_FN = Count_FN / (Count_FN + Count_TP),
Rate_TN = Count_TN / (Count_TN + Count_FP),
Accuracy = (Count_TP + Count_TN) /
(Count_TP + Count_TN + Count_FN + Count_FP)) %>%
mutate(Threshold = round(x,2))
all_prediction <- rbind(all_prediction,this_prediction)
x <- x + .01
}
return(all_prediction)
}
else if (!missing(group)) {
while (x <= 1) {
this_prediction <- data.frame()
this_prediction <-
data %>%
mutate(predclass = ifelse(!!predictedProbs > x, 1,0)) %>%
group_by(!!group) %>%
count(predclass, !!observedClass) %>%
summarize(Count_TN = sum(n[predclass==0 & !!observedClass==0]),
Count_TP = sum(n[predclass==1 & !!observedClass==1]),
Count_FN = sum(n[predclass==0 & !!observedClass==1]),
Count_FP = sum(n[predclass==1 & !!observedClass==0]),
Rate_TP = Count_TP / (Count_TP + Count_FN),
Rate_FP = Count_FP / (Count_FP + Count_TN),
Rate_FN = Count_FN / (Count_FN + Count_TP),
Rate_TN = Count_TN / (Count_TN + Count_FP),
Accuracy = (Count_TP + Count_TN) /
(Count_TP + Count_TN + Count_FN + Count_FP)) %>%
mutate(Threshold = round(x, 2))
all_prediction <- rbind(all_prediction, this_prediction)
x <- x + .01
}
return(all_prediction)
}
}
```
```
testProbs.thresholds <-
iterateThresholds(data=testProbs, observedClass = class,
predictedProbs = probs, group = Race)
filter(testProbs.thresholds, Threshold == .5) %>%
dplyr::select(Accuracy, Race, starts_with("Rate")) %>%
gather(Variable, Value, -Race) %>%
ggplot(aes(Variable, Value, fill = Race)) +
geom_bar(aes(fill = Race), position = "dodge", stat = "identity") +
scale_fill_manual(values = palette_3_colors) +
labs(title="Confusion matrix rates by race",
subtitle = "50% threshold", x = "Outcome",y = "Rate") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
* **False Negatives** \- The rate at which African Americans are incorrectly predicted *not* to recidivate is noticeably lower than for other races. A judge granting parole in error this way means an individual is incorrectly released from prison only to commit another crime. Here, the social cost is on society, not the ex\-offender. In the context of disparate impact, Caucasians will be incorrectly released at greater rates than African Americans.
* **False Positives** \- The rate at which African Americans are incorrectly predicted to recidivate is noticeably higher than for other races. A judge faced with a parole decision when this error comes into play will incorrectly prevent a prisoner from being released. Here, the social cost is most certainly with the ex\-offender, with a much greater disparate impact for African Americans.
These two metrics alone suggest that the use of this algorithm may have a disparate impact on African Americans. Thus, we should be wary of how useful this algorithm is for making criminal justice decisions.
7\.4 What about the threshold?
------------------------------
As we learned in the previous chapter, the threshold at which an individual outcome is classified to be true can make all the difference. The same is true in this use case, but can finding an optimal threshold help to erase the disparate impact? Let’s explore some metrics related to the threshold, beginning with an across race ROC Curve.
The ROC Curve measures trade\-offs in true positive and false positive rates for each threshold. Recall from Figure 6\.7 (the ‘football plot’), that the diagonal ‘coin flip line’ suggests a classifier which gets it right 50% of the time, but also gets it wrong 50% of the time. Anything classified on or below the coin flip line is not useful.
```
aucTable <-
testProbs %>%
group_by(Race) %>%
summarize(AUC = auc(class,probs)) %>%
mutate(AUC = as.character(round(AUC, 3)))
mutate(testProbs.thresholds, pointSize = ifelse(Threshold == .48, 24, 16)) %>%
ggplot(aes(Rate_FP, Rate_TP, colour=Race)) +
geom_point(aes(shape = pointSize)) + geom_line() + scale_shape_identity() +
scale_color_manual(values = palette_3_colors) +
geom_abline(slope = 1, intercept = 0, size = 1.5, color = 'grey') +
annotation_custom(tableGrob(aucTable, rows = NULL), xmin = .65, xmax = 1, ymin = 0, ymax = .25) +
labs(title="ROC Curves by race", x="False Positive Rate", y="True Positive Rate") +
plotTheme()
```
Above, the ROC curve is generated for each race and shows there are really three different curves. `testProbs.thresholds` shows that a threshold giving African Americans a True Positive rate of roughly 76% gives Caucasians a rate of just 48%. At that same threshold, however, the model makes more False Positives for African Americans (37%) than it does for Caucasians (22%).
To see evidence of these disparities, note the triangles in the figure above or try `filter(testProbs.thresholds, Threshold == 0.48)`.
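The sketch below pulls just the relevant rates at that threshold; the column names are those produced by `iterateThresholds` above.

```
# across-race disparities at the 0.48 threshold
filter(testProbs.thresholds, Threshold == 0.48) %>%
  dplyr::select(Race, Rate_TP, Rate_FP, Rate_FN, Rate_TN, Accuracy)
```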
What is the appropriate risk score threshold at which we should conclude an ex\-offender will recidivate? The ROC curves suggest a threshold suitable for one race may not be robust for another. It stands to reason then that perhaps a different threshold for each race may help to equalize some of these confusion metric differences.
7\.5 Optimizing ‘equitable’ thresholds
--------------------------------------
If a given predicted threshold leads to a disparate impact for one race, then it is inequitable. Let’s explore the possibility of reducing disparate impact by searching for an equitable threshold for each race. I was first exposed to the idea of equitable thresholds through the work of colleagues at Oregon’s State Department of Child Protective Services, who develop machine learning models to predict child welfare outcomes and allocate limited social worker resources. Fairness is a key concern for the development team who published a paper on their approach for ‘fairness correction’.[66](#fn66) In this section, I replicate a more simple version of that work.
Consider a simple measure of disparity that argues for a comparable absolute difference in False Positive (`Rate_FP`) and False Negative (`Rate_FN`) rates across both races. The below table shows what that disparity looks like at the 50% threshold. We see a comparable `Difference` in accuracy, but diverging error rates across races.
| Var | African\-American | Caucasian | Difference |
| --- | --- | --- | --- |
| Accuracy | 0\.6871089 | 0\.6824458 | 0\.0046631 |
| Rate\_FN | 0\.2925659 | 0\.5392670 | 0\.2467011 |
| Rate\_FP | 0\.3350785 | 0\.1835443 | 0\.1515342 |
| Disparities in across\-race confusion metrics (50% threshold) |
| --- |
| Table 7\.2 |
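One way the disparity figures in Table 7\.2 might be assembled from `testProbs.thresholds` is sketched below; the code actually used to render the table may differ.

```
# across-race differences in accuracy and error rates at the 50% threshold
filter(testProbs.thresholds, Threshold == 0.5,
       Race %in% c("African-American", "Caucasian")) %>%
  dplyr::select(Race, Accuracy, Rate_FN, Rate_FP) %>%
  gather(Var, Value, -Race) %>%
  spread(Race, Value) %>%
  mutate(Difference = abs(`African-American` - Caucasian))
```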
Let’s try to achieve equitable thresholds that:
1. Exhibit little difference in across\-race False Positive and Negative Rates; while ensuring both rates are relatively low.
2. Exhibit little difference in across\-race Accuracy rates; while ensuring these rates are relatively high.
To create this optimization, across\-race confusion metrics and accuracies are calculated for each possible threshold combination. Here, to keep it simple, thresholds are analyzed at 10% intervals and only for two races. Doing so results in the below visualization.
Here, the x\-axis runs from 0 (no difference in across\-race confusion metrics) and 1 (perfect accuracy). The y\-axis shows each threshold pair. Shades of purple represent accuracy rates \- which we hope are close to 1 and comparable, across races. Shades of blue represent differences in across\-rate confusion metrics, which should be close to 0 and comparable. False error rates in shades of green are shown for only African Americans, given the associated social costs.
The plot is sorted in a way that maximizes the (euclidean) distance between accuracies and differences, such that a distance of `1` would mean both differences are `0` and both accuracies are `1`.
When the `0.5, 0.5` threshold (in red) is visualized this way, it is clear that a 50% default is far from ideal. There are some interesting candidates here for most equitable \- but tradeoffs exist. `0.6, 0.5` has high and comparable accuracy rates; differences in false positives close to 0; but an overall high false negative rate for African Americans. Consider how this contrasts with `0.5, 0.4`, which lowers the threshold for Caucasians instead of increasing it for African Americans. What more can you say about the seemingly most equitable threshold pair, 0\.6, 0\.5?
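A minimal sketch (not the book’s hidden code) of how the metrics behind this figure and the table below could be assembled from `testProbs.thresholds`, pairing 10% thresholds for the two races; `near` is used to avoid floating\-point equality issues.

```
tenths <- seq(.1, 1, .1)

# metrics for one threshold pair: t_afAm for African Americans, t_cauc for Caucasians
pair_metrics <- function(t_afAm, t_cauc) {
  afAm <- filter(testProbs.thresholds, Race == "African-American", near(Threshold, t_afAm))
  cauc <- filter(testProbs.thresholds, Race == "Caucasian", near(Threshold, t_cauc))
  data.frame(threshold = paste(t_afAm, t_cauc, sep = ", "),
             False_Negative_Diff = round(abs(afAm$Rate_FN - cauc$Rate_FN), 2),
             False_Positive_Diff = round(abs(afAm$Rate_FP - cauc$Rate_FP), 2),
             African.American.accuracy = afAm$Accuracy,
             Caucasian.accuracy = cauc$Accuracy,
             African.American.FP = afAm$Rate_FP,
             African.American.FN = afAm$Rate_FN)
}

# all 10 x 10 threshold combinations
threshold_pairs <-
  tidyr::crossing(t_afAm = tenths, t_cauc = tenths) %>%
  purrr::pmap_dfr(pair_metrics)
```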
Below is a table of results for each threshold pair. Note that these results are likely influenced by the particular training/test split. Iteratively sampling or ‘bootstrapping’ many splits, and calculating equitable thresholds for each, would be a better approach. In addition, estimating results using thresholds at 1% intervals may be more robust than at 10% intervals.
| threshold | False\_Negative\_Diff | False\_Positive\_Diff | African.American.accuracy | Caucasian.accuracy | African.American.FP | African.American.FN |
| --- | --- | --- | --- | --- | --- | --- |
| 0\.6, 0\.5 | 0\.07 | 0\.01 | 0\.6708385 | 0\.6824458 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.6 | 0\.03 | 0\.01 | 0\.6132666 | 0\.6903353 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.4 | 0\.08 | 0\.02 | 0\.6871089 | 0\.6410256 | 0\.3350785 | 0\.2925659 |
| 0\.8, 0\.7 | 0\.05 | 0\.01 | 0\.5419274 | 0\.6765286 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.3 | 0\.10 | 0\.04 | 0\.6720901 | 0\.6035503 | 0\.5445026 | 0\.1294964 |
| 0\.9, 0\.8 | 0\.01 | 0\.01 | 0\.4956195 | 0\.6351085 | 0\.0026178 | 0\.9640288 |
| 0\.3, 0\.2 | 0\.01 | 0\.01 | 0\.6207760 | 0\.5128205 | 0\.7225131 | 0\.0647482 |
| 1, 1 | 0\.00 | 0\.00 | 0\.4780976 | 0\.6232742 | 0\.0000000 | 1\.0000000 |
| 0\.9, 0\.9 | 0\.03 | 0\.00 | 0\.4956195 | 0\.6232742 | 0\.0026178 | 0\.9640288 |
| 0\.9, 1 | 0\.04 | 0\.00 | 0\.4956195 | 0\.6232742 | 0\.0026178 | 0\.9640288 |
| 1, 0\.9 | 0\.01 | 0\.01 | 0\.4780976 | 0\.6232742 | 0\.0000000 | 1\.0000000 |
| 0\.8, 0\.8 | 0\.09 | 0\.01 | 0\.5419274 | 0\.6351085 | 0\.0157068 | 0\.8633094 |
| 0\.7, 0\.7 | 0\.15 | 0\.06 | 0\.6132666 | 0\.6765286 | 0\.0890052 | 0\.6594724 |
| 0\.7, 0\.5 | 0\.12 | 0\.09 | 0\.6132666 | 0\.6824458 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.3 | 0\.07 | 0\.16 | 0\.6871089 | 0\.6035503 | 0\.3350785 | 0\.2925659 |
| 1, 0\.8 | 0\.05 | 0\.01 | 0\.4780976 | 0\.6351085 | 0\.0000000 | 1\.0000000 |
| 1, 0\.1 | 0\.99 | 0\.96 | 0\.4780976 | 0\.4023669 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.6 | 0\.22 | 0\.09 | 0\.6708385 | 0\.6903353 | 0\.1727749 | 0\.4724221 |
| 0\.8, 0\.9 | 0\.13 | 0\.01 | 0\.5419274 | 0\.6232742 | 0\.0157068 | 0\.8633094 |
| 0\.6, 0\.4 | 0\.10 | 0\.18 | 0\.6708385 | 0\.6410256 | 0\.1727749 | 0\.4724221 |
| 0\.9, 0\.7 | 0\.15 | 0\.03 | 0\.4956195 | 0\.6765286 | 0\.0026178 | 0\.9640288 |
| 0\.8, 0\.6 | 0\.17 | 0\.06 | 0\.5419274 | 0\.6903353 | 0\.0157068 | 0\.8633094 |
| 0\.8, 1 | 0\.14 | 0\.02 | 0\.5419274 | 0\.6232742 | 0\.0157068 | 0\.8633094 |
| 0\.9, 0\.1 | 0\.96 | 0\.95 | 0\.4956195 | 0\.4023669 | 0\.0026178 | 0\.9640288 |
| 1, 0\.7 | 0\.19 | 0\.03 | 0\.4780976 | 0\.6765286 | 0\.0000000 | 1\.0000000 |
| 0\.4, 0\.2 | 0\.06 | 0\.19 | 0\.6720901 | 0\.5128205 | 0\.5445026 | 0\.1294964 |
| 0\.5, 0\.5 | 0\.25 | 0\.15 | 0\.6871089 | 0\.6824458 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.6 | 0\.27 | 0\.08 | 0\.4956195 | 0\.6903353 | 0\.0026178 | 0\.9640288 |
| 0\.7, 0\.8 | 0\.29 | 0\.08 | 0\.6132666 | 0\.6351085 | 0\.0890052 | 0\.6594724 |
| 0\.2, 0\.1 | 0\.02 | 0\.04 | 0\.5469337 | 0\.4023669 | 0\.9188482 | 0\.0263789 |
| 0\.1, 0\.1 | 0\.00 | 0\.04 | 0\.5231539 | 0\.4023669 | 0\.9947644 | 0\.0023981 |
| 1, 0\.6 | 0\.31 | 0\.08 | 0\.4780976 | 0\.6903353 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.7 | 0\.34 | 0\.14 | 0\.6708385 | 0\.6765286 | 0\.1727749 | 0\.4724221 |
| 0\.8, 0\.1 | 0\.86 | 0\.94 | 0\.5419274 | 0\.4023669 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.4 | 0\.24 | 0\.19 | 0\.6720901 | 0\.6410256 | 0\.5445026 | 0\.1294964 |
| 0\.7, 0\.9 | 0\.33 | 0\.08 | 0\.6132666 | 0\.6232742 | 0\.0890052 | 0\.6594724 |
| 0\.1, 1 | 1\.00 | 0\.99 | 0\.5231539 | 0\.6232742 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.3 | 0\.16 | 0\.22 | 0\.6207760 | 0\.6035503 | 0\.7225131 | 0\.0647482 |
| 0\.7, 1 | 0\.34 | 0\.09 | 0\.6132666 | 0\.6232742 | 0\.0890052 | 0\.6594724 |
| 0\.2, 0\.2 | 0\.05 | 0\.18 | 0\.5469337 | 0\.5128205 | 0\.9188482 | 0\.0263789 |
| 0\.1, 0\.9 | 0\.99 | 0\.99 | 0\.5231539 | 0\.6232742 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.1 | 0\.06 | 0\.23 | 0\.6207760 | 0\.4023669 | 0\.7225131 | 0\.0647482 |
| 0\.8, 0\.5 | 0\.32 | 0\.17 | 0\.5419274 | 0\.6824458 | 0\.0157068 | 0\.8633094 |
| 0\.1, 0\.8 | 0\.95 | 0\.99 | 0\.5231539 | 0\.6351085 | 0\.9947644 | 0\.0023981 |
| 0\.4, 0\.1 | 0\.12 | 0\.41 | 0\.6720901 | 0\.4023669 | 0\.5445026 | 0\.1294964 |
| 0\.1, 0\.2 | 0\.07 | 0\.26 | 0\.5231539 | 0\.5128205 | 0\.9947644 | 0\.0023981 |
| 0\.5, 0\.6 | 0\.40 | 0\.26 | 0\.6871089 | 0\.6903353 | 0\.3350785 | 0\.2925659 |
| 0\.2, 1 | 0\.97 | 0\.92 | 0\.5469337 | 0\.6232742 | 0\.9188482 | 0\.0263789 |
| 0\.6, 0\.8 | 0\.48 | 0\.16 | 0\.6708385 | 0\.6351085 | 0\.1727749 | 0\.4724221 |
| 0\.9, 0\.5 | 0\.42 | 0\.18 | 0\.4956195 | 0\.6824458 | 0\.0026178 | 0\.9640288 |
| 1, 0\.2 | 0\.93 | 0\.74 | 0\.4780976 | 0\.5128205 | 0\.0000000 | 1\.0000000 |
| 0\.2, 0\.9 | 0\.96 | 0\.91 | 0\.5469337 | 0\.6232742 | 0\.9188482 | 0\.0263789 |
| 1, 0\.5 | 0\.46 | 0\.18 | 0\.4780976 | 0\.6824458 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.3 | 0\.25 | 0\.33 | 0\.6708385 | 0\.6035503 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.4 | 0\.29 | 0\.26 | 0\.6132666 | 0\.6410256 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.2 | 0\.22 | 0\.40 | 0\.6871089 | 0\.5128205 | 0\.3350785 | 0\.2925659 |
| 0\.6, 0\.9 | 0\.52 | 0\.17 | 0\.6708385 | 0\.6232742 | 0\.1727749 | 0\.4724221 |
| 0\.6, 1 | 0\.53 | 0\.17 | 0\.6708385 | 0\.6232742 | 0\.1727749 | 0\.4724221 |
| 0\.2, 0\.8 | 0\.93 | 0\.91 | 0\.5469337 | 0\.6351085 | 0\.9188482 | 0\.0263789 |
| 0\.7, 0\.1 | 0\.65 | 0\.87 | 0\.6132666 | 0\.4023669 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.1 | 0\.29 | 0\.62 | 0\.6871089 | 0\.4023669 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.2 | 0\.89 | 0\.73 | 0\.4956195 | 0\.5128205 | 0\.0026178 | 0\.9640288 |
| 0\.6, 0\.1 | 0\.47 | 0\.78 | 0\.6708385 | 0\.4023669 | 0\.1727749 | 0\.4724221 |
| 0\.4, 0\.5 | 0\.41 | 0\.36 | 0\.6720901 | 0\.6824458 | 0\.5445026 | 0\.1294964 |
| 0\.3, 0\.4 | 0\.31 | 0\.37 | 0\.6207760 | 0\.6410256 | 0\.7225131 | 0\.0647482 |
| 0\.1, 0\.7 | 0\.81 | 0\.97 | 0\.5231539 | 0\.6765286 | 0\.9947644 | 0\.0023981 |
| 0\.5, 0\.7 | 0\.52 | 0\.31 | 0\.6871089 | 0\.6765286 | 0\.3350785 | 0\.2925659 |
| 0\.2, 0\.3 | 0\.20 | 0\.42 | 0\.5469337 | 0\.6035503 | 0\.9188482 | 0\.0263789 |
| 0\.3, 1 | 0\.94 | 0\.72 | 0\.6207760 | 0\.6232742 | 0\.7225131 | 0\.0647482 |
| 1, 0\.4 | 0\.63 | 0\.35 | 0\.4780976 | 0\.6410256 | 0\.0000000 | 1\.0000000 |
| 0\.1, 0\.3 | 0\.22 | 0\.49 | 0\.5231539 | 0\.6035503 | 0\.9947644 | 0\.0023981 |
| 0\.2, 0\.7 | 0\.79 | 0\.89 | 0\.5469337 | 0\.6765286 | 0\.9188482 | 0\.0263789 |
| 0\.8, 0\.2 | 0\.79 | 0\.72 | 0\.5419274 | 0\.5128205 | 0\.0157068 | 0\.8633094 |
| 0\.3, 0\.9 | 0\.92 | 0\.72 | 0\.6207760 | 0\.6232742 | 0\.7225131 | 0\.0647482 |
| 1, 0\.3 | 0\.77 | 0\.50 | 0\.4780976 | 0\.6035503 | 0\.0000000 | 1\.0000000 |
| 0\.5, 0\.8 | 0\.66 | 0\.33 | 0\.6871089 | 0\.6351085 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.4 | 0\.59 | 0\.35 | 0\.4956195 | 0\.6410256 | 0\.0026178 | 0\.9640288 |
| 0\.8, 0\.4 | 0\.49 | 0\.34 | 0\.5419274 | 0\.6410256 | 0\.0157068 | 0\.8633094 |
| 0\.5, 0\.9 | 0\.70 | 0\.33 | 0\.6871089 | 0\.6232742 | 0\.3350785 | 0\.2925659 |
| 0\.5, 1 | 0\.71 | 0\.34 | 0\.6871089 | 0\.6232742 | 0\.3350785 | 0\.2925659 |
| 0\.1, 0\.6 | 0\.69 | 0\.92 | 0\.5231539 | 0\.6903353 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.8 | 0\.89 | 0\.71 | 0\.6207760 | 0\.6351085 | 0\.7225131 | 0\.0647482 |
| 0\.6, 0\.2 | 0\.40 | 0\.56 | 0\.6708385 | 0\.5128205 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.3 | 0\.43 | 0\.41 | 0\.6132666 | 0\.6035503 | 0\.0890052 | 0\.6594724 |
| 0\.9, 0\.3 | 0\.74 | 0\.50 | 0\.4956195 | 0\.6035503 | 0\.0026178 | 0\.9640288 |
| 0\.4, 0\.6 | 0\.56 | 0\.47 | 0\.6720901 | 0\.6903353 | 0\.5445026 | 0\.1294964 |
| 0\.4, 1 | 0\.87 | 0\.54 | 0\.6720901 | 0\.6232742 | 0\.5445026 | 0\.1294964 |
| 0\.2, 0\.4 | 0\.35 | 0\.57 | 0\.5469337 | 0\.6410256 | 0\.9188482 | 0\.0263789 |
| 0\.3, 0\.5 | 0\.47 | 0\.54 | 0\.6207760 | 0\.6824458 | 0\.7225131 | 0\.0647482 |
| 0\.4, 0\.9 | 0\.86 | 0\.54 | 0\.6720901 | 0\.6232742 | 0\.5445026 | 0\.1294964 |
| 0\.2, 0\.6 | 0\.66 | 0\.84 | 0\.5469337 | 0\.6903353 | 0\.9188482 | 0\.0263789 |
| 0\.4, 0\.8 | 0\.82 | 0\.54 | 0\.6720901 | 0\.6351085 | 0\.5445026 | 0\.1294964 |
| 0\.8, 0\.3 | 0\.64 | 0\.48 | 0\.5419274 | 0\.6035503 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.7 | 0\.68 | 0\.52 | 0\.6720901 | 0\.6765286 | 0\.5445026 | 0\.1294964 |
| 0\.1, 0\.4 | 0\.37 | 0\.64 | 0\.5231539 | 0\.6410256 | 0\.9947644 | 0\.0023981 |
| 0\.7, 0\.2 | 0\.59 | 0\.65 | 0\.6132666 | 0\.5128205 | 0\.0890052 | 0\.6594724 |
| 0\.3, 0\.7 | 0\.75 | 0\.69 | 0\.6207760 | 0\.6765286 | 0\.7225131 | 0\.0647482 |
| 0\.1, 0\.5 | 0\.54 | 0\.81 | 0\.5231539 | 0\.6824458 | 0\.9947644 | 0\.0023981 |
| 0\.2, 0\.5 | 0\.51 | 0\.74 | 0\.5469337 | 0\.6824458 | 0\.9188482 | 0\.0263789 |
| 0\.3, 0\.6 | 0\.63 | 0\.64 | 0\.6207760 | 0\.6903353 | 0\.7225131 | 0\.0647482 |
| |
| --- |
| Table 7\.3 |
Does engineering equitable thresholds help reduce disparate impact? Perhaps, but as with many interesting social science phenomena, trade\-offs must be made and no threshold is perfect. Consider how such an outcome relates to the across\-race selection bias in these data. In this chapter, the focus has been on social costs \- which is just one of many potential decision\-making bottom lines in government.
While it is often difficult to qualitatively or quantitatively judge the social costs and benefits of algorithms, I urge you to try. The results may not communicate neatly in dollar signs. Instead, careful discussion is needed on the decision\-making process, the business\-as\-usual approach, the underlying data and all the many trade\-offs. What is more, terms like ‘False Positive’ and ‘ROC curve’ will likely be lost on the Police Chief, Mayor or other non\-technical decision\-makers.
That’s fine. The Police Chief has enough domain expertise to understand how the trade\-offs affect the efficiency and effectiveness of her agency \- so long as you or a member of your team can communicate well. Keep in mind that these decisions are already being made by human beings, and you can create a confusion matrix for the business\-as\-usual decision\-making.
By now, it is clear that these methods are imperfect. Algorithms make mistakes, as do humans and even judges. The question is, are these methods useful?
Judging a useful algorithm thus requires far more than data science prowess. For use cases with potentially significant social costs, proper governance is key. We return to the role of ‘Algorithmic Governance’ next, in the book’s conclusion.
7\.6 Assignment \- Memo to the Mayor
------------------------------------
Let’s see how good you are at communicating these very nuanced data science results to non\-technical domain experts. Assume you are a data scientist working for the Department of Prisons and you are to *make recommendations* to your city’s Mayor about whether or not she should adopt a new recidivism algorithm to help allocate the City’s ex\-offender job training program. Some in the Administration have expressed their concern that expending the City’s limited job training resources on ex\-offenders who recidivate shortly after their release is not good policy. What do you think?
Begin by sketching out the basics of the job training program, quantifying the program costs. Although our concern is social costs, research the financial costs to individuals and society related to imprisonment. Calculate costs and benefits using this research as well as your own sensible assumptions; choose a threshold from `testProbs.thresholds`, and use it to develop a qualitative and quantitative cost/benefit analysis.
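As a starting point, below is a minimal cost/benefit sketch of my own, not from the text: the dollar figures, the `0.6` threshold, and the assumption that everyone predicted to recidivate is offered training are all placeholders to be replaced with your researched estimates.
```
# Placeholder cost assumptions - replace these with researched figures
programCost    <- 5000    # hypothetical per-person cost of job training
prisonCostYear <- 35000   # hypothetical annual cost of imprisonment

# Cost/benefit by race for one candidate threshold from testProbs.thresholds
filter(testProbs.thresholds, Threshold == 0.6) %>%
  dplyr::select(Race, starts_with("Count")) %>%
  mutate(Allocated     = Count_TP + Count_FP,        # predicted recidivists offered training
         Training_Cost = Allocated * programCost,
         Averted_Costs = Count_TP * prisonCostYear,  # optimistic upper bound on prison costs averted
         Net_Benefit   = Averted_Costs - Training_Cost) %>%
  kable() %>% kable_styling()
```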
In your memo, no shorter than 600 words and no longer than 800 words, **you must argue for the use of the algorithm**, including the following information:
1. Explain the job training program, why the City is considering an algorithm to allocate the program and how the algorithm works, in brief.
2. Acknowledge and explain to the Mayor why she should be concerned with algorithmic fairness.
3. Present your cost/benefit analysis for your equitable threshold of choice **including** an across\-race grouped bar plot of each confusion metric for both the 0\.5, 0\.5 threshold and your optimal threshold.
4. Interpret the trade\-off between accuracy and generalizability as it relates to *the use* of this algorithm in prioritizing a job training program and advocate that the Mayor adopt the algorithm.
Remember, if your memo is too technical, you will lose the Mayor’s attention. If you are disingenuous about disparate impact, the press and other advocates will eventually get hold of your memo and use it to hurt the Mayor politically. If you are too strong about the implications for fairness, the Mayor will not agree to use the algorithm and your boss may fire you. The best hint I can give you is to focus your memo on the use case at hand \- but don’t forget, politics are an important consideration.
7\.1 Introduction
-----------------
The churn use case from Chapter 6 is one example of how machine learning algorithms increasingly make decisions in place of humans. Marketing campaigns, insurance, credit cards, bank loans, news and shopping recommendations are all now allocated with these methods. Bestsellers like Cathy O’Neil’s “Weapons of Math Destruction” and relentless news coverage of tech company data mining suggest these algorithms can bring as much peril as they do promise.[49](#fn49)
Government is still unsure how to regulate private\-sector algorithms \- their inner workings cast as intellectual property and closed off from public scrutiny. In the public sector, however, there is an expectation that algorithms are open and transparent. While governments *today* are using algorithms to automate their decision\-making, many lack the regulatory or planning wherewithal to ensure these models are fair.
In this chapter, we learn how to open the black box of these algorithms to better judge them for fairness and better understand the pertinent social costs. A person\-based model is estimated to predict ‘recidivism’ \- the term used when an offender released from prison re\-offends and must go back to prison. The use of these algorithms has exploded in recent years \- another example of the criminal justice system as an early adopter of machine learning.
A recidivism predictive model may be used by a judge to inform sentencing or parole decisions. It may also be used to prioritize who gets access to prisoner reentry programs like job training, for example.
If churn was the only context we had for the efficacy of these tools, we might think applying machine learning to recidivism is a no\-brainer. This could not be further from the truth. Here, as was the case in Predictive Policing (Chapter 5\), we will see that when human bias is baked into machine learning predictions, the result is a decision\-making tool that is not useful.
As before, a model that is not useful is one that lacks generalizability across groups. With churn, generalizability was not considered, and the cost of getting it wrong is simply a loss of revenue. In geospatial Predictive Policing, the cost is racially discriminatory police surveillance. With recidivism, the cost is the systematic and disproportionate over\-imprisonment of one race relative to another.
In other words, a biased business algorithm can cost money, but a biased government algorithm can cost lives and livelihoods. The associated social and economic costs could be massive, and we must learn to evaluate these models with these costs in mind.
One example of bias in the recidivism use case is higher False Positive rates for African American ex\-offenders compared to Caucasians. A False Positive in this context means that the algorithm predicted an ex\-offender would recidivate but they did not. If a judge uses a predicted recidivism ‘risk score’ to aid in his or her sentencing decision, and such a bias exists, then a disproportional number of African Americans may be incarcerated for longer than they otherwise deserve. Taken across tens of thousands of citizens in one metropolitan area, the social costs are unfathomable.
### 7\.1\.1 The spectre of disparate impact
Social Scientists are familiar with issues of fairness and discrimination, but identifying algorithmic discrimination is just as nuanced as it would be in say, housing and labor markets. It is unlikely that a jurisdiction would create a discriminatory algorithm on purpose. Given the black box nature of these models, it is more likely they would create a decision\-making tool that has a “disparate impact” on members of a protected class. Disparate impact is a legal theory positing that although a policy or program may not be discriminatory *prima facie*, it still may have an adverse discriminatory effect, even if unintended.
Recall our need to ensure that *geospatial* predictive algorithms generalize from one urban context to the next. This same rationale applies with people\-based models, but the concern is generalizability across different protected classes, like gender and race.
If an algorithm does not generalize to one group, its use for resource allocation may have a disparate impact on that group. The False Positive example for African Americans relative to Caucasians is one such example. This may occur because the algorithm lacks appropriate features to accurately model the African American “experience”. It may also be that the training data itself is biased, a critique discussed at length in Chapter 5\.
As a reminder, systematic over\-policing of historically disenfranchised communities creates a feedback loop where more reported crime leads to more cops on patrol, who then report more crimes, that ultimately lead to more convictions. In a predictive setting, if this *selection bias* goes unobserved, the systematic error will move into the error term and lead to bias and unintended social costs.
It is impossible to identify the effect of unobserved variables. As an alternative, researchers have very recently developed a series of fairness metrics that can be used to judge disparate impact.[50](#fn50) A 2018 review by Verma \& Rubin is particularly relevant for policy\-makers interested in learning more about fairness metrics.[51](#fn51)
For example, in 2016, journalists from ProPublica released an evaluation of the COMPAS recidivism prediction algorithm built by a company called Northpointe, finding that while the algorithm had comparable *accuracy* rates across different racial groups, there were clear racial differences in *errors* that had high social costs.[52](#fn52) This paradox led ProPublica to ask a fascinating question \- “how could an algorithm simultaneously be fair and unfair?”[53](#fn53) In this chapter, we will make use of the COMPAS data ProPublica used for their analysis and develop fairness metrics that can help identify disparate impact.
### 7\.1\.2 Modeling judicial outcomes
In the criminal justice system, as in life, decisions are made by weighing risks. Among a host of Federal sentencing guidelines, judges are to “protect the public from further crimes of the defendant.”[54](#fn54) Rhetorically, this sounds straightforward \- identify the risk that an individual will cause the public harm and impose a sentence to reduce this risk. However, bias always plays a role in decision\-making. We would never ask the average citizen to weigh risks and punish accordingly because we do not believe the average citizen could act with impartiality. Although this is the standard we impose on judges, even they make systematic mistakes.[55](#fn55)
The use of these data\-driven risk models in the criminal justice system has only increased in recent years.[56](#fn56) Algorithms are predicting risk for a host of use cases including bail hearings[57](#fn57), parole[58](#fn58), and sentencing decisions that assess future criminal behavior.[59](#fn59)
Can an algorithm help judges make better decisions? Recent research determined that even with much less data on hand, people without a criminal justice background make recidivism predictions as accurately as the COMPAS algorithm.[60](#fn60) Very importantly, studies have also shown that introducing prediction into the decision\-making process can reduce the odds of re\-arrests.[61](#fn61)
Collectively, this research suggests that there may be benefits for governments in adopting these tools \- but do these benefits outweigh the social costs? No doubt, more research is needed on the social justice implications of these algorithms. However, the more timely need is for government to proactively explore biases in the models they are currently developing.
As was the case with churn, the confusion metrics are instrumental in communicating biases to non\-technical decision\-makers because they directly reflect the business process at hand.
### 7\.1\.3 Accuracy and generalizability in recidivism algorithms
Accuracy and generalizability continue to be the two yardsticks we use to measure the utility of our algorithms. The goal of a recidivism classifier is to predict two binary outcomes \- `Recidivate` and `notRecidivate`. While the “percent of correct predictions” is a simple measure of accuracy, it lacks the nuance needed to detect disparate impact. As they were in Chapter 6, confusion metrics will continue to be key.
The basic premise of the recidivism model is to learn the recidivism experience of ex\-offenders in the recent past and test the extent to which this experience generalizes to a population for which the propensity to recidivate is unknown. The prediction from the model is a “risk score” running from 0 to 1, interpreted as “the probability person *i* will recidivate.” The model can then be validated by comparing predicted classifications to observed classifications, giving a host of more nuanced errors including:
**True Positive (“Sensitivity”)** \- “The person was predicted to recidivate and actually recidivated.”
**True Negative (“Specificity”)** \- “The person was predicted not to recidivate and actually did not recidivate.”
**False Positive** \- “The person was predicted to recidivate and actually did not recidivate.”
**False Negative** \- “The person was predicted not to recidivate and actually did recidivate.”
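To make these four outcomes concrete, here is a toy example of my own (not chapter code) with made\-up observed and predicted classes for ten ex\-offenders; `caret::confusionMatrix()`, from a package loaded later in the chapter, tabulates all four outcomes at once.
```
library(caret)

# Made-up observed and predicted recidivism classes for ten ex-offenders
observed  <- factor(c(1, 1, 0, 0, 1, 0, 1, 0, 0, 1), levels = c(0, 1))
predicted <- factor(c(1, 0, 0, 1, 1, 0, 1, 0, 0, 0), levels = c(0, 1))

# Rows are predictions, columns are observations; recidivism (1) is the positive class
confusionMatrix(predicted, observed, positive = "1")
```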
7\.2 Data and exploratory analysis
----------------------------------
Begin by loading the necessary R packages, reading in `plotTheme` via the `source` file below, and defining some color palettes.
```
library(lubridate)
library(tidyverse)
library(caret)
library(kableExtra)
library(ModelMetrics)
library(plotROC)
library(knitr)
library(grid)
library(gridExtra)
library(QuantPsyc)
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette_9_colors <- c("#FF2AD4","#E53AD8","#CC4ADC","#996AE5","#7F7BE9",
"#668BED","#33ABF6","#19BBFA","#00CCFF")
palette_3_colors <- c("#FF2AD4","#7F7BE9","#00CCFF")
palette_2_colors <- c("#FF2AD4", "#00CCFF")
palette_1_colors <- c("#00CCFF")
```
The data for this chapter comes directly from ProPublica’s Github repository[62](#fn62), and was the impetus for a series of articles on bias in criminal justice algorithms.[63](#fn63)
At the time of writing, no data dictionary had been posted, thus many of the feature engineering routines employed below were copied directly from ProPublica’s IPython Notebook.[64](#fn64) While this is not ideal, it is, at times, the nature of working with open data. The below table shows each variable used in the analysis.
| Variable | Description |
| --- | --- |
| sex | Categorical variable that indicates whether the ex\-offender is male or female |
| age | The age of the person |
| age\_cat | Variable that categorizes ex\-offenders into three groups by age: Less than 25, 25 to 45, Greater than 45 |
| race | The race of the person |
| priors\_count | The number of prior crimes committed |
| two\_year\_recid | Numerical binary variable indicating whether the person recidivated, where 0 means the person did not recidivate and 1 means they did |
| r\_charge\_desc | Description of the charge upon recidivating |
| c\_charge\_desc | Description of the original criminal charge |
| c\_charge\_degree | Degree of the original charge |
| r\_charge\_degree | Degree of the charge upon recidivating |
| juv\_other\_count | Categorical variable of the number of prior juvenile convictions that are not considered either felonies or misdemeanors |
| length\_of\_stay | How long the person stayed in jail |
| Recidivated | Character binary variable of whether the person recidivated (Recidivate) or not (notRecidivate) |
| |
| --- |
| Table 7\.1 |
The cleaned dataset describes 6,162 ex\-offenders screened by COMPAS in 2013 and 2014\. There are 53 columns in the original data describing length of jail stays, type of charges, the degree of crimes committed, and criminal history. Many variables were added by Northpointe, the original author of the COMPAS algorithm, and are not relevant to the model building process. Also noticeably absent are economic and educational outcomes for these individuals. The model developed below is simplistic \- it is not a replication of the existing Northpointe algorithm.
```
raw_data <- read.csv(file.path(root.dir,"Chapter7/compas-scores-two-years.csv"))
df <-
raw_data %>%
filter(days_b_screening_arrest <= 30) %>%
filter(days_b_screening_arrest >= -30) %>%
filter(is_recid != -1) %>%
filter(c_charge_degree != "O") %>%
filter(priors_count != "36") %>%
filter(priors_count != "25") %>%
mutate(length_of_stay = as.numeric(as.Date(c_jail_out) - as.Date(c_jail_in)),
priors_count = as.factor(priors_count),
Recidivated = as.factor(ifelse(two_year_recid == 1,"Recidivate","notRecidivate")),
recidivatedNumeric = ifelse(Recidivated == "Recidivate", 1, 0),
race2 = case_when(race == "Caucasian" ~ "Caucasian",
race == "African-American" ~ "African-American",
TRUE ~ "Other")) %>%
dplyr::select(sex,age,age_cat,race,race2,priors_count,two_year_recid,r_charge_desc,
c_charge_desc,c_charge_degree,r_charge_degree,juv_other_count,
length_of_stay,priors_count,Recidivated,recidivatedNumeric) %>%
filter(priors_count != 38)
```
Figure 7\.1 illustrates the most frequent initial charges. Crimes of varying severity are included in the dataset. Note the use of `reorder` and `FUN = max` in the `ggplot` call.
```
group_by(df, c_charge_desc) %>%
summarize(count = n()) %>%
mutate(rate = count / sum(count)) %>%
arrange(-rate) %>% head(9) %>%
ggplot(aes(reorder(c_charge_desc, rate, FUN = max),
rate, fill = c_charge_desc)) +
geom_col() + coord_flip() +
scale_fill_manual(values = palette_9_colors) +
labs(x = "Charge", y = "Rate", title= "Most frequent initial charges") +
plotTheme() + theme(legend.position = "none")
```
Figure 7\.2 visualizes the rate of recidivism by race. Note that the rate of recidivism for African Americans (59%) is twice that of Caucasians (29%). If this reported rate is driven by reporting or other bias, then it may have important implications for the model’s usefulness.
```
df %>%
group_by(Recidivated, race) %>%
summarize(n = n()) %>%
mutate(freq = n / sum(n)) %>% filter(Recidivated == "Recidivate") %>%
ggplot(aes(reorder(race, -freq), freq)) +
geom_bar(stat = "identity", position = "dodge", fill = palette_2_colors[2]) +
labs(title = "Recidivism rate by race",
y = "Rate", x = "Race") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
7\.3 Estimate two recidivism models
-----------------------------------
“You mustn’t include race in the model because that will ensure resource allocation decisions will, in part, be guided by race.” I’ve heard this line countless times in my career. But as we will learn, the bottom line is that if racial bias is baked into the training data, then controlling explicitly for race is not likely to remove it. This section tests this theory by estimating a logistic regression with race and one without.
The dependent variable is `Recidivated`, which is coded as `1` for inmates who experienced a recidivism event and `0` for those who did not. Aside from race, the below two models include sex, age, the number of “other” convictions as a juvenile, the count of prior adult convictions, and the length of stay in prison.
The data is split into a 75% training set and a 25% test set using a simple `dplyr` approach.
```
train <- df %>% dplyr::sample_frac(.75)
train_index <- as.numeric(rownames(train))
test <- df[-train_index, ]
```
`reg.noRace` and `reg.withRace` are estimated below. `summary(reg.noRace)` shows that all features are statistically significant and their signs reasonable. For example, as `age` increases, the probability of recidivism decreases. Conversely, the longer `length_of_stay` in the prison system, the greater the likelihood that an individual recidivates.
Note that `priors_count` is input as a factor. If it were input as a continuous feature, the interpretation would be ‘a one unit increase in priors leads to a corresponding increase in the propensity to recidivate.’ By converting to factor, the interpretation is that there is a statistically significant difference between 0 and *n* priors. Most of these fixed effects are significant.
```
reg.noRace <- glm(Recidivated ~ ., data =
train %>% dplyr::select(sex, age, age_cat,
juv_other_count, length_of_stay,
priors_count, Recidivated),
family = "binomial"(link = "logit"))
reg.withRace <- glm(Recidivated ~ ., data =
train %>% dplyr::select(sex, age, age_cat, race,
juv_other_count, length_of_stay,
priors_count, Recidivated),
family = "binomial"(link = "logit"))
```
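To inspect the estimates referenced above, run `summary()` on the model. Exponentiating the coefficients into odds ratios is an optional extra step of mine, not part of the text’s workflow, but it can make effect sizes easier to discuss.
```
# Coefficient table referenced in the text
summary(reg.noRace)

# Optional: odds ratios (exponentiated log-odds) are often easier to interpret
exp(coef(reg.noRace))
```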
The summary of `reg.withRace` is quite revealing. You may try two specifications: one with the current 6\-category `race` feature and one with an alternative `race2` feature that includes categories for just `Caucasian`, `African-American`, and `Other`. In both instances, the race variables are largely insignificant, suggesting that differences in race are not driving the propensity to recidivate.
How can that be given the differences illustrated in Figure 7\.2 above? To explore further, try to estimate another regression, the same as `reg.withRace`, but with `race2` (instead of `race`) and without `priors_count`.
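One way to run the suggested experiment is sketched below; the object name `reg.withRace2` is illustrative only, and the specification simply mirrors `reg.withRace` with `race2` swapped in and `priors_count` dropped.
```
# Same specification as reg.withRace, but with race2 and without priors_count
reg.withRace2 <- glm(Recidivated ~ ., data =
                       train %>% dplyr::select(sex, age, age_cat, race2,
                                               juv_other_count, length_of_stay,
                                               Recidivated),
                     family = "binomial"(link = "logit"))

summary(reg.withRace2)
```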
Why is race significant when `priors_count` is removed? Figure 7\.3 below shows the mean `priors_count` by race. African Americans are reported to have far higher prior rates than other races. Thus, `race` and `priors_count` tell the same story, and this collinearity renders race insignificant when both are included in the model.
As race plays no role in the usefulness of our model, `reg.noRace` is used for the remainder of the analysis.
```
group_by(df, race2) %>%
summarize(averagePriors = mean(as.numeric(priors_count))) %>%
ggplot(aes(race2, averagePriors, fill = race2)) +
geom_bar(stat="identity", position = "dodge") +
labs(title="Mean priors by race", y = "Mean Priors", x = "Race") +
scale_fill_manual(values = palette_3_colors, name = "Recidivism") +
plotTheme() + theme(legend.position = "none")
```
### 7\.3\.1 Accuracy \& Generalizability
Both accuracy and the confusion metrics are discussed here with emphasis on generalizability across race. To begin, the code below assembles a `testProbs` data frame of observed outcomes, predicted probabilities, and race; a predicted recidivism class, `predClass`, is then assigned to any predicted probability over 0\.50\.
```
testProbs <-
data.frame(class = test$recidivatedNumeric,
probs = predict(reg.noRace, test, type = "response"),
Race = test$race2)
```
The first cause for concern comes in Figure 7\.4 below, which contrasts observed and predicted recidivism rates given the 50% threshold. About 45% of ex\-offenders are observed to recidivate across all races, but only 40% are predicted to do so. This underprediction is far more pronounced for Caucasians and other races relative to African Americans.
```
mutate(testProbs, predClass = ifelse(probs >= .5, 1, 0)) %>%
group_by(Race) %>%
summarize(Observed.recidivism = sum(class) / n(),
Predicted.recidivism = sum(predClass) / n()) %>%
gather(Variable, Value, -Race) %>%
ggplot(aes(Race, Value)) +
geom_bar(aes(fill = Race), position="dodge", stat="identity") +
scale_fill_manual(values = palette_3_colors) +
facet_wrap(~Variable) +
labs(title = "Observed and predicted recidivism", x = "Race", y = "Rate") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
Let’s delve a bit deeper by visualizing confusion metrics by race. Northpointe, the company that markets decision\-making tools based on these data, has argued that the algorithm is fair because of the comparable across\-race accuracy rates.[65](#fn65) Table 7\.2 below confirms this claim \- but that is far from the entire story.
Despite equal accuracy rates, the issue lies in the disparities across each of the other confusion metrics. The `iterateThresholds` function, first used in Chapter 6, will be used again to calculate confusion metrics for each threshold by race.
The function takes several inputs: a data frame of predicted probabilities (`data`); the observed outcome column (`observedClass`); the predicted probabilities column (`predictedProbs`); and an optional `group` parameter that returns confusion metrics separately by group \- here, race.
Below, the function is run and the results are filtered for just the 50% threshold. Accuracy and the confusion metrics as rates are selected out, converted to long form and then plotted as a grouped bar plot. Let’s interpret each metric in the context of social cost.
```
# Calculate confusion metrics for every threshold from 1% to 100%, optionally by group (here, race)
iterateThresholds <- function(data, observedClass, predictedProbs, group) {
# Capture column names with tidy evaluation so they can be used inside dplyr verbs
observedClass <- enquo(observedClass)
predictedProbs <- enquo(predictedProbs)
group <- enquo(group)
# Start at a 1% threshold and step up by 1% on each pass through the loop
x = .01
all_prediction <- data.frame()
# No group supplied: compute metrics across the entire data set
if (missing(group)) {
while (x <= 1) {
this_prediction <- data.frame()
this_prediction <-
data %>%
mutate(predclass = ifelse(!!predictedProbs > x, 1,0)) %>%
count(predclass, !!observedClass) %>%
summarize(Count_TN = sum(n[predclass==0 & !!observedClass==0]),
Count_TP = sum(n[predclass==1 & !!observedClass==1]),
Count_FN = sum(n[predclass==0 & !!observedClass==1]),
Count_FP = sum(n[predclass==1 & !!observedClass==0]),
Rate_TP = Count_TP / (Count_TP + Count_FN),
Rate_FP = Count_FP / (Count_FP + Count_TN),
Rate_FN = Count_FN / (Count_FN + Count_TP),
Rate_TN = Count_TN / (Count_TN + Count_FP),
Accuracy = (Count_TP + Count_TN) /
(Count_TP + Count_TN + Count_FN + Count_FP)) %>%
mutate(Threshold = round(x,2))
all_prediction <- rbind(all_prediction,this_prediction)
x <- x + .01
}
return(all_prediction)
}
else if (!missing(group)) {
# A group was supplied: compute metrics separately for each group level
while (x <= 1) {
this_prediction <- data.frame()
this_prediction <-
data %>%
mutate(predclass = ifelse(!!predictedProbs > x, 1,0)) %>%
group_by(!!group) %>%
count(predclass, !!observedClass) %>%
summarize(Count_TN = sum(n[predclass==0 & !!observedClass==0]),
Count_TP = sum(n[predclass==1 & !!observedClass==1]),
Count_FN = sum(n[predclass==0 & !!observedClass==1]),
Count_FP = sum(n[predclass==1 & !!observedClass==0]),
Rate_TP = Count_TP / (Count_TP + Count_FN),
Rate_FP = Count_FP / (Count_FP + Count_TN),
Rate_FN = Count_FN / (Count_FN + Count_TP),
Rate_TN = Count_TN / (Count_TN + Count_FP),
Accuracy = (Count_TP + Count_TN) /
(Count_TP + Count_TN + Count_FN + Count_FP)) %>%
mutate(Threshold = round(x, 2))
all_prediction <- rbind(all_prediction, this_prediction)
x <- x + .01
}
return(all_prediction)
}
}
```
```
testProbs.thresholds <-
iterateThresholds(data=testProbs, observedClass = class,
predictedProbs = probs, group = Race)
filter(testProbs.thresholds, Threshold == .5) %>%
dplyr::select(Accuracy, Race, starts_with("Rate")) %>%
gather(Variable, Value, -Race) %>%
ggplot(aes(Variable, Value, fill = Race)) +
geom_bar(aes(fill = Race), position = "dodge", stat = "identity") +
scale_fill_manual(values = palette_3_colors) +
labs(title="Confusion matrix rates by race",
subtitle = "50% threshold", x = "Outcome",y = "Rate") +
plotTheme() + theme(axis.text.x = element_text(angle = 45, hjust = 1))
```
* **False Negatives** \- The rate at which African Americans are incorrectly predicted *not* to recidivate is noticeably lower than for other races. A judge granting parole in error this way releases an individual from prison who goes on to commit another crime. Here, the social cost falls on society, not the ex\-offender. In the context of disparate impact, Caucasians will be incorrectly released at greater rates than African Americans.
* **False Positives** \- The rate at which African Americans are incorrectly predicted to recidivate is noticeably higher than for other races. A judge faced with a parole decision when this error comes into play will incorrectly prevent a prisoner from being released. Here, the social cost falls most certainly on the ex\-offender, with a much greater disparate impact for African Americans.
These two metrics alone suggest that the use of this algorithm may have a disparate impact on African Americans. Thus, we should be wary of how useful this algorithm is for making criminal justice decisions.
7\.4 What about the threshold?
------------------------------
As we learned in the previous chapter, the threshold at which an individual outcome is classified as true can make all the difference. The same is true in this use case, but can finding an optimal threshold help to erase the disparate impact? Let’s explore some metrics related to the threshold, beginning with an across\-race ROC curve.
The ROC curve measures trade\-offs in true positive and false positive rates for each threshold. Recall from Figure 6\.7 (the ‘football plot’) that the diagonal ‘coin flip line’ suggests a classifier which gets it right 50% of the time, but also gets it wrong 50% of the time. Anything classified on or below the coin flip line is not useful.
```
aucTable <-
testProbs %>%
group_by(Race) %>%
summarize(AUC = auc(class,probs)) %>%
mutate(AUC = as.character(round(AUC, 3)))
mutate(testProbs.thresholds, pointSize = ifelse(Threshold == .48, 24, 16)) %>%
ggplot(aes(Rate_FP, Rate_TP, colour=Race)) +
geom_point(aes(shape = pointSize)) + geom_line() + scale_shape_identity() +
scale_color_manual(values = palette_3_colors) +
geom_abline(slope = 1, intercept = 0, size = 1.5, color = 'grey') +
annotation_custom(tableGrob(aucTable, rows = NULL), xmin = .65, xmax = 1, ymin = 0, ymax = .25) +
labs(title="ROC Curves by race", x="False Positive Rate", y="True Positive Rate") +
plotTheme()
```
Above, the ROC curve is generated for each race, showing that there are really three different curves. `testProbs.thresholds` shows that a threshold giving African Americans a True Positive Rate of roughly 76% gives Caucasians a rate of just 48%. At that same threshold, however, the model makes more False Positives for African Americans (37%) than it does for Caucasians (22%).
To see evidence of these disparities, note the triangles in the figure above or try `filter(testProbs.thresholds, Threshold == 0.48)`.
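Spelled out, that check might look like the following, with a few columns selected to keep the output readable.
```
# Across-race metrics at the 0.48 threshold highlighted in the ROC plot
filter(testProbs.thresholds, Threshold == 0.48) %>%
  dplyr::select(Race, Rate_TP, Rate_FP, Accuracy)
```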
What is the appropriate risk score threshold at which we should conclude an ex\-offender will recidivate? The ROC curves suggest a threshold suitable for one race may not be robust for another. It stands to reason then that perhaps a different threshold for each race may help to equalize some of these confusion metric differences.
7\.5 Optimizing ‘equitable’ thresholds
--------------------------------------
If a given prediction threshold leads to a disparate impact for one race, then it is inequitable. Let’s explore the possibility of reducing disparate impact by searching for an equitable threshold for each race. I was first exposed to the idea of equitable thresholds through the work of colleagues at Oregon’s State Department of Child Protective Services, who develop machine learning models to predict child welfare outcomes and allocate limited social worker resources. Fairness is a key concern for the development team, who published a paper on their approach to ‘fairness correction’.[66](#fn66) In this section, I replicate a simpler version of that work.
Consider a simple measure of disparity that argues for a comparable absolute difference in False Positive (`Rate_FP`) and False Negative (`Rate_FN`) rates across both races. The below table shows what that disparity looks like at the 50% threshold. We see a comparable `Difference` in accuracy, but diverging error rates across races.
| Var | African\-American | Caucasian | Difference |
| --- | --- | --- | --- |
| Accuracy | 0\.6871089 | 0\.6824458 | 0\.0046631 |
| Rate\_FN | 0\.2925659 | 0\.5392670 | 0\.2467011 |
| Rate\_FP | 0\.3350785 | 0\.1835443 | 0\.1515342 |
| Disparities in across\-race confusion metrics (50% threshold) |
| --- |
| Table 7\.2 |
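The code behind Table 7\.2 is not shown; a rough reconstruction of my own from `testProbs.thresholds` might look like the following.
```
# Approximate reconstruction of Table 7.2: across-race disparities at the 50% threshold
filter(testProbs.thresholds, Threshold == 0.5,
       Race %in% c("African-American", "Caucasian")) %>%
  ungroup() %>%
  dplyr::select(Race, Accuracy, Rate_FN, Rate_FP) %>%
  gather(Var, Value, -Race) %>%
  spread(Race, Value) %>%
  mutate(Difference = abs(`African-American` - Caucasian)) %>%
  kable() %>% kable_styling()
```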
Let’s try to achieve equitable thresholds that:
1. Exhibit little difference in across\-race False Positive and Negative Rates; while ensuring both rates are relatively low.
2. Exhibit little difference in across\-race Accuracy rates; while ensuring these rates are relatively high.
To create this optimization, across\-race confusion metrics and accuracies are calculated for each possible threshold combination. Here, to keep it simple, thresholds are analyzed at 10% intervals and only for two races. Doing so results in the below visualization.
Here, the x\-axis runs from 0 (no difference in across\-race confusion metrics) to 1 (perfect accuracy). The y\-axis shows each threshold pair. Shades of purple represent accuracy rates, which we hope are close to 1 and comparable across races. Shades of blue represent differences in across\-race confusion metrics, which should be close to 0 and comparable. False Positive and False Negative rates, in shades of green, are shown only for African Americans, given the associated social costs.
The plot is sorted in a way that maximizes the (Euclidean) distance between accuracies and differences, such that a distance of `1` would mean both differences are `0` and both accuracies are `1`.
When the `0.5, 0.5` threshold (in red) is visualized this way, it is clear that a 50% default is far from ideal. There are some interesting candidates here for the most equitable pair, but trade\-offs exist. `0.6, 0.5` has high and comparable accuracy rates and differences in False Positives close to 0, but an overall high False Negative rate for African Americans. Consider how this contrasts with `0.5, 0.4`, which lowers the threshold for Caucasians instead of increasing it for African Americans. What more can you say about the seemingly most equitable threshold pair, 0\.6, 0\.5?
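The routine that builds this threshold\-pair grid is not shown in the text; assuming it works from `testProbs.thresholds`, one possible sketch is below, with column names chosen to mirror Table 7\.3. Sorting the pairs by the Euclidean distance described above, and reshaping for the plot, are omitted for brevity.
```
# Candidate thresholds at 10% intervals for the two races of interest
thresholds.10pct <- round(seq(.1, 1, .1), 2)

aa <- filter(testProbs.thresholds, Race == "African-American",
             Threshold %in% thresholds.10pct) %>% ungroup()
cauc <- filter(testProbs.thresholds, Race == "Caucasian",
               Threshold %in% thresholds.10pct) %>% ungroup()

# Cross every African-American threshold with every Caucasian threshold,
# then attach each race's confusion metrics and compute absolute differences
threshold.pairs <-
  expand.grid(aa.threshold = aa$Threshold, cauc.threshold = cauc$Threshold) %>%
  left_join(aa,   by = c("aa.threshold" = "Threshold")) %>%
  left_join(cauc, by = c("cauc.threshold" = "Threshold"), suffix = c(".aa", ".cauc")) %>%
  mutate(threshold = paste(aa.threshold, cauc.threshold, sep = ", "),
         False_Negative_Diff = round(abs(Rate_FN.aa - Rate_FN.cauc), 2),
         False_Positive_Diff = round(abs(Rate_FP.aa - Rate_FP.cauc), 2)) %>%
  dplyr::select(threshold, False_Negative_Diff, False_Positive_Diff,
                African.American.accuracy = Accuracy.aa,
                Caucasian.accuracy = Accuracy.cauc,
                African.American.FP = Rate_FP.aa,
                African.American.FN = Rate_FN.aa)
```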
Below is a table of results for each threshold pair. Note that these results are likely influenced by the particular training/test split. Iteratively sampling or ‘bootstrapping’ many splits, and calculating equitable thresholds for each, would be a better approach. In addition, estimating results using thresholds in increments of hundredths rather than tenths may be more robust.
| threshold | False\_Negative\_Diff | False\_Positive\_Diff | African.American.accuracy | Caucasian.accuracy | African.American.FP | African.American.FN |
| --- | --- | --- | --- | --- | --- | --- |
| 0\.6, 0\.5 | 0\.07 | 0\.01 | 0\.6708385 | 0\.6824458 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.6 | 0\.03 | 0\.01 | 0\.6132666 | 0\.6903353 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.4 | 0\.08 | 0\.02 | 0\.6871089 | 0\.6410256 | 0\.3350785 | 0\.2925659 |
| 0\.8, 0\.7 | 0\.05 | 0\.01 | 0\.5419274 | 0\.6765286 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.3 | 0\.10 | 0\.04 | 0\.6720901 | 0\.6035503 | 0\.5445026 | 0\.1294964 |
| 0\.9, 0\.8 | 0\.01 | 0\.01 | 0\.4956195 | 0\.6351085 | 0\.0026178 | 0\.9640288 |
| 0\.3, 0\.2 | 0\.01 | 0\.01 | 0\.6207760 | 0\.5128205 | 0\.7225131 | 0\.0647482 |
| 1, 1 | 0\.00 | 0\.00 | 0\.4780976 | 0\.6232742 | 0\.0000000 | 1\.0000000 |
| 0\.9, 0\.9 | 0\.03 | 0\.00 | 0\.4956195 | 0\.6232742 | 0\.0026178 | 0\.9640288 |
| 0\.9, 1 | 0\.04 | 0\.00 | 0\.4956195 | 0\.6232742 | 0\.0026178 | 0\.9640288 |
| 1, 0\.9 | 0\.01 | 0\.01 | 0\.4780976 | 0\.6232742 | 0\.0000000 | 1\.0000000 |
| 0\.8, 0\.8 | 0\.09 | 0\.01 | 0\.5419274 | 0\.6351085 | 0\.0157068 | 0\.8633094 |
| 0\.7, 0\.7 | 0\.15 | 0\.06 | 0\.6132666 | 0\.6765286 | 0\.0890052 | 0\.6594724 |
| 0\.7, 0\.5 | 0\.12 | 0\.09 | 0\.6132666 | 0\.6824458 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.3 | 0\.07 | 0\.16 | 0\.6871089 | 0\.6035503 | 0\.3350785 | 0\.2925659 |
| 1, 0\.8 | 0\.05 | 0\.01 | 0\.4780976 | 0\.6351085 | 0\.0000000 | 1\.0000000 |
| 1, 0\.1 | 0\.99 | 0\.96 | 0\.4780976 | 0\.4023669 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.6 | 0\.22 | 0\.09 | 0\.6708385 | 0\.6903353 | 0\.1727749 | 0\.4724221 |
| 0\.8, 0\.9 | 0\.13 | 0\.01 | 0\.5419274 | 0\.6232742 | 0\.0157068 | 0\.8633094 |
| 0\.6, 0\.4 | 0\.10 | 0\.18 | 0\.6708385 | 0\.6410256 | 0\.1727749 | 0\.4724221 |
| 0\.9, 0\.7 | 0\.15 | 0\.03 | 0\.4956195 | 0\.6765286 | 0\.0026178 | 0\.9640288 |
| 0\.8, 0\.6 | 0\.17 | 0\.06 | 0\.5419274 | 0\.6903353 | 0\.0157068 | 0\.8633094 |
| 0\.8, 1 | 0\.14 | 0\.02 | 0\.5419274 | 0\.6232742 | 0\.0157068 | 0\.8633094 |
| 0\.9, 0\.1 | 0\.96 | 0\.95 | 0\.4956195 | 0\.4023669 | 0\.0026178 | 0\.9640288 |
| 1, 0\.7 | 0\.19 | 0\.03 | 0\.4780976 | 0\.6765286 | 0\.0000000 | 1\.0000000 |
| 0\.4, 0\.2 | 0\.06 | 0\.19 | 0\.6720901 | 0\.5128205 | 0\.5445026 | 0\.1294964 |
| 0\.5, 0\.5 | 0\.25 | 0\.15 | 0\.6871089 | 0\.6824458 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.6 | 0\.27 | 0\.08 | 0\.4956195 | 0\.6903353 | 0\.0026178 | 0\.9640288 |
| 0\.7, 0\.8 | 0\.29 | 0\.08 | 0\.6132666 | 0\.6351085 | 0\.0890052 | 0\.6594724 |
| 0\.2, 0\.1 | 0\.02 | 0\.04 | 0\.5469337 | 0\.4023669 | 0\.9188482 | 0\.0263789 |
| 0\.1, 0\.1 | 0\.00 | 0\.04 | 0\.5231539 | 0\.4023669 | 0\.9947644 | 0\.0023981 |
| 1, 0\.6 | 0\.31 | 0\.08 | 0\.4780976 | 0\.6903353 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.7 | 0\.34 | 0\.14 | 0\.6708385 | 0\.6765286 | 0\.1727749 | 0\.4724221 |
| 0\.8, 0\.1 | 0\.86 | 0\.94 | 0\.5419274 | 0\.4023669 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.4 | 0\.24 | 0\.19 | 0\.6720901 | 0\.6410256 | 0\.5445026 | 0\.1294964 |
| 0\.7, 0\.9 | 0\.33 | 0\.08 | 0\.6132666 | 0\.6232742 | 0\.0890052 | 0\.6594724 |
| 0\.1, 1 | 1\.00 | 0\.99 | 0\.5231539 | 0\.6232742 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.3 | 0\.16 | 0\.22 | 0\.6207760 | 0\.6035503 | 0\.7225131 | 0\.0647482 |
| 0\.7, 1 | 0\.34 | 0\.09 | 0\.6132666 | 0\.6232742 | 0\.0890052 | 0\.6594724 |
| 0\.2, 0\.2 | 0\.05 | 0\.18 | 0\.5469337 | 0\.5128205 | 0\.9188482 | 0\.0263789 |
| 0\.1, 0\.9 | 0\.99 | 0\.99 | 0\.5231539 | 0\.6232742 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.1 | 0\.06 | 0\.23 | 0\.6207760 | 0\.4023669 | 0\.7225131 | 0\.0647482 |
| 0\.8, 0\.5 | 0\.32 | 0\.17 | 0\.5419274 | 0\.6824458 | 0\.0157068 | 0\.8633094 |
| 0\.1, 0\.8 | 0\.95 | 0\.99 | 0\.5231539 | 0\.6351085 | 0\.9947644 | 0\.0023981 |
| 0\.4, 0\.1 | 0\.12 | 0\.41 | 0\.6720901 | 0\.4023669 | 0\.5445026 | 0\.1294964 |
| 0\.1, 0\.2 | 0\.07 | 0\.26 | 0\.5231539 | 0\.5128205 | 0\.9947644 | 0\.0023981 |
| 0\.5, 0\.6 | 0\.40 | 0\.26 | 0\.6871089 | 0\.6903353 | 0\.3350785 | 0\.2925659 |
| 0\.2, 1 | 0\.97 | 0\.92 | 0\.5469337 | 0\.6232742 | 0\.9188482 | 0\.0263789 |
| 0\.6, 0\.8 | 0\.48 | 0\.16 | 0\.6708385 | 0\.6351085 | 0\.1727749 | 0\.4724221 |
| 0\.9, 0\.5 | 0\.42 | 0\.18 | 0\.4956195 | 0\.6824458 | 0\.0026178 | 0\.9640288 |
| 1, 0\.2 | 0\.93 | 0\.74 | 0\.4780976 | 0\.5128205 | 0\.0000000 | 1\.0000000 |
| 0\.2, 0\.9 | 0\.96 | 0\.91 | 0\.5469337 | 0\.6232742 | 0\.9188482 | 0\.0263789 |
| 1, 0\.5 | 0\.46 | 0\.18 | 0\.4780976 | 0\.6824458 | 0\.0000000 | 1\.0000000 |
| 0\.6, 0\.3 | 0\.25 | 0\.33 | 0\.6708385 | 0\.6035503 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.4 | 0\.29 | 0\.26 | 0\.6132666 | 0\.6410256 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.2 | 0\.22 | 0\.40 | 0\.6871089 | 0\.5128205 | 0\.3350785 | 0\.2925659 |
| 0\.6, 0\.9 | 0\.52 | 0\.17 | 0\.6708385 | 0\.6232742 | 0\.1727749 | 0\.4724221 |
| 0\.6, 1 | 0\.53 | 0\.17 | 0\.6708385 | 0\.6232742 | 0\.1727749 | 0\.4724221 |
| 0\.2, 0\.8 | 0\.93 | 0\.91 | 0\.5469337 | 0\.6351085 | 0\.9188482 | 0\.0263789 |
| 0\.7, 0\.1 | 0\.65 | 0\.87 | 0\.6132666 | 0\.4023669 | 0\.0890052 | 0\.6594724 |
| 0\.5, 0\.1 | 0\.29 | 0\.62 | 0\.6871089 | 0\.4023669 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.2 | 0\.89 | 0\.73 | 0\.4956195 | 0\.5128205 | 0\.0026178 | 0\.9640288 |
| 0\.6, 0\.1 | 0\.47 | 0\.78 | 0\.6708385 | 0\.4023669 | 0\.1727749 | 0\.4724221 |
| 0\.4, 0\.5 | 0\.41 | 0\.36 | 0\.6720901 | 0\.6824458 | 0\.5445026 | 0\.1294964 |
| 0\.3, 0\.4 | 0\.31 | 0\.37 | 0\.6207760 | 0\.6410256 | 0\.7225131 | 0\.0647482 |
| 0\.1, 0\.7 | 0\.81 | 0\.97 | 0\.5231539 | 0\.6765286 | 0\.9947644 | 0\.0023981 |
| 0\.5, 0\.7 | 0\.52 | 0\.31 | 0\.6871089 | 0\.6765286 | 0\.3350785 | 0\.2925659 |
| 0\.2, 0\.3 | 0\.20 | 0\.42 | 0\.5469337 | 0\.6035503 | 0\.9188482 | 0\.0263789 |
| 0\.3, 1 | 0\.94 | 0\.72 | 0\.6207760 | 0\.6232742 | 0\.7225131 | 0\.0647482 |
| 1, 0\.4 | 0\.63 | 0\.35 | 0\.4780976 | 0\.6410256 | 0\.0000000 | 1\.0000000 |
| 0\.1, 0\.3 | 0\.22 | 0\.49 | 0\.5231539 | 0\.6035503 | 0\.9947644 | 0\.0023981 |
| 0\.2, 0\.7 | 0\.79 | 0\.89 | 0\.5469337 | 0\.6765286 | 0\.9188482 | 0\.0263789 |
| 0\.8, 0\.2 | 0\.79 | 0\.72 | 0\.5419274 | 0\.5128205 | 0\.0157068 | 0\.8633094 |
| 0\.3, 0\.9 | 0\.92 | 0\.72 | 0\.6207760 | 0\.6232742 | 0\.7225131 | 0\.0647482 |
| 1, 0\.3 | 0\.77 | 0\.50 | 0\.4780976 | 0\.6035503 | 0\.0000000 | 1\.0000000 |
| 0\.5, 0\.8 | 0\.66 | 0\.33 | 0\.6871089 | 0\.6351085 | 0\.3350785 | 0\.2925659 |
| 0\.9, 0\.4 | 0\.59 | 0\.35 | 0\.4956195 | 0\.6410256 | 0\.0026178 | 0\.9640288 |
| 0\.8, 0\.4 | 0\.49 | 0\.34 | 0\.5419274 | 0\.6410256 | 0\.0157068 | 0\.8633094 |
| 0\.5, 0\.9 | 0\.70 | 0\.33 | 0\.6871089 | 0\.6232742 | 0\.3350785 | 0\.2925659 |
| 0\.5, 1 | 0\.71 | 0\.34 | 0\.6871089 | 0\.6232742 | 0\.3350785 | 0\.2925659 |
| 0\.1, 0\.6 | 0\.69 | 0\.92 | 0\.5231539 | 0\.6903353 | 0\.9947644 | 0\.0023981 |
| 0\.3, 0\.8 | 0\.89 | 0\.71 | 0\.6207760 | 0\.6351085 | 0\.7225131 | 0\.0647482 |
| 0\.6, 0\.2 | 0\.40 | 0\.56 | 0\.6708385 | 0\.5128205 | 0\.1727749 | 0\.4724221 |
| 0\.7, 0\.3 | 0\.43 | 0\.41 | 0\.6132666 | 0\.6035503 | 0\.0890052 | 0\.6594724 |
| 0\.9, 0\.3 | 0\.74 | 0\.50 | 0\.4956195 | 0\.6035503 | 0\.0026178 | 0\.9640288 |
| 0\.4, 0\.6 | 0\.56 | 0\.47 | 0\.6720901 | 0\.6903353 | 0\.5445026 | 0\.1294964 |
| 0\.4, 1 | 0\.87 | 0\.54 | 0\.6720901 | 0\.6232742 | 0\.5445026 | 0\.1294964 |
| 0\.2, 0\.4 | 0\.35 | 0\.57 | 0\.5469337 | 0\.6410256 | 0\.9188482 | 0\.0263789 |
| 0\.3, 0\.5 | 0\.47 | 0\.54 | 0\.6207760 | 0\.6824458 | 0\.7225131 | 0\.0647482 |
| 0\.4, 0\.9 | 0\.86 | 0\.54 | 0\.6720901 | 0\.6232742 | 0\.5445026 | 0\.1294964 |
| 0\.2, 0\.6 | 0\.66 | 0\.84 | 0\.5469337 | 0\.6903353 | 0\.9188482 | 0\.0263789 |
| 0\.4, 0\.8 | 0\.82 | 0\.54 | 0\.6720901 | 0\.6351085 | 0\.5445026 | 0\.1294964 |
| 0\.8, 0\.3 | 0\.64 | 0\.48 | 0\.5419274 | 0\.6035503 | 0\.0157068 | 0\.8633094 |
| 0\.4, 0\.7 | 0\.68 | 0\.52 | 0\.6720901 | 0\.6765286 | 0\.5445026 | 0\.1294964 |
| 0\.1, 0\.4 | 0\.37 | 0\.64 | 0\.5231539 | 0\.6410256 | 0\.9947644 | 0\.0023981 |
| 0\.7, 0\.2 | 0\.59 | 0\.65 | 0\.6132666 | 0\.5128205 | 0\.0890052 | 0\.6594724 |
| 0\.3, 0\.7 | 0\.75 | 0\.69 | 0\.6207760 | 0\.6765286 | 0\.7225131 | 0\.0647482 |
| 0\.1, 0\.5 | 0\.54 | 0\.81 | 0\.5231539 | 0\.6824458 | 0\.9947644 | 0\.0023981 |
| 0\.2, 0\.5 | 0\.51 | 0\.74 | 0\.5469337 | 0\.6824458 | 0\.9188482 | 0\.0263789 |
| 0\.3, 0\.6 | 0\.63 | 0\.64 | 0\.6207760 | 0\.6903353 | 0\.7225131 | 0\.0647482 |
| |
| --- |
| Table 7\.3 |
Does engineering equitable thresholds help reduce disparate impact? Perhaps, but like many interesting social science phenomena, trade\-offs must be made and no threshold is perfect. Consider how such an outcome relates to the across\-race selection bias in these data. In this chapter, the focus has been on social costs \- which is just one of many potential decision\-making bottom lines in government.
While it is often difficult to qualitatively or quantitatively judge the social costs and benefits of algorithms, I urge you to try. The results may not communicate neatly with dollar signs. Instead, careful discussion is needed on the decision\-making process, the business\-as\-usual approach, the underlying data and all the many trade\-offs. Moreover, terms like ‘False Positive’, ROC curves etc. will likely be lost on the Police Chief, Mayor or other non\-technical decision\-makers.
That’s fine. The Police Chief has enough domain expertise to understand how the trade\-offs affect the efficiency and effectiveness of her agency \- so long as you or a member of your team can communicate well. Keep in mind that these decisions are already being made by human beings, and you can create a confusion matrix for the business\-as\-usual decision\-making.
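As a minimal, hypothetical sketch of that idea \- the `bau` data frame and its columns below are made up for illustration, not drawn from the recidivism data \- a business\-as\-usual confusion matrix is just a cross\-tabulation of past human decisions against observed outcomes:
```
# hypothetical business-as-usual decisions and outcomes (made-up example data)
bau <- data.frame(
  decision    = c("detain", "release", "release", "detain", "release"),
  recidivated = c("yes",    "no",      "yes",     "no",     "no"))

# cross-tabulate human decisions against observed outcomes
table(Decision = bau$decision, Recidivated = bau$recidivated)
```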
By now, it is clear that these methods are imperfect. Algorithms make mistakes, as do humans and even judges. The question is, are these methods useful?
Judging a useful algorithm thus requires far more than data science prowess. For use cases with potentially significant social costs, proper governance is key. We return to the role of ‘Algorithmic Governance’ next, in the book’s conclusion.
7\.6 Assignment \- Memo to the Mayor
------------------------------------
Let’s see how good you are at communicating these very nuanced data science results to non\-technical domain experts. Assume you are a data scientist working for the Department of Prisons and you are to *make recommendations* to your city’s Mayor about whether or not she should adopt a new recidivism algorithm into how the City allocates an ex\-offender job training program. Some in the Administration have expressed their concern that expending the City’s limited job training resources on ex\-offenders who recidivate shortly after their release is not good policy. What do you think?
Begin by sketching out the basics of the job training program, quantifying the program costs. Although our concern is social costs, research the financial costs to individuals and society related to imprisonment. Calculate costs and benefits using this research as well as your own sensible assumptions; choose a threshold from `testProbs.thresholds`, and use it to develop a qualitative and quantitative cost/benefit analysis.
In your memo, no shorter than 600 words and no longer than 800 words, **you must argue for the use of the algorithm**, including the following information:
1. Explain the job training program, why the City is considering an algorithm to allocate the program and how the algorithm works, in brief.
2. Acknowledge and explain to the Mayor why she should be concerned with algorithmic fairness.
3. Present your cost/benefit analysis for your equitable threshold of choice **including** an across\-race grouped bar plot of each confusion metric for the 0\.5, 0\.5 threshold and your optimal threshold.
4. Interpret the trade\-off between accuracy and generalizability as it relates to *the use* of this algorithm in prioritizing a job training program and advocate that the Mayor adopt the algorithm.
Remember, if your memo is too technical, you will lose the Mayor’s attention. If you are disingenuous about disparate impact, the press and other advocates will eventually get hold of your memo and use it to hurt the Mayor politically. If you are too strong about the implications for fairness, the Mayor will not agree to use the algorithm and your boss may fire you. The best hint I can give you is to focus your memo on the use case at hand \- but don’t forget, politics are an important consideration.
Chapter 8 Predicting rideshare demand
=====================================
8\.1 Introduction \- ride share
-------------------------------
This last chapter returns to spatial problem solving to predict space/time demand for ride share in Chicago. Companies like Uber \& Lyft generate and analyze tremendous amounts of data to incentivize ride share use; to employ dynamic or ‘surge’ pricing; to solve routing problems; and to forecast ride share demand to minimize driver response times. This last use case is the focus of this chapter.
The model developed here is similar to the other geospatial machine learning models built thus far, with two exceptions. First, this chapter focuses on time effects, adding complexity to our models, and second, social costs are less important here than they have been in previous chapters.
We have dealt with time once previously. Recall the Predictive Policing algorithm was trained on 2017 burglaries and validated on 2018 (5\.5\.4\). Here, time was not an explicit parameter in the model. Instead, it was assumed that the 2017 burglary experience generalized to 2018\.
To forecast ride share, time must be explicitly accounted for. Conceptually, modeling time is not all that different from modeling space. Spatial autocorrelation posits that values *here* are in part, a function of nearby values. In the case of temporal or ‘serial correlation’, a similar hypothesis can be posited \- that the value *now* is in part, a function of values in the past.
There are many examples of serial correlation. Gas prices today are related to gas prices yesterday. Same with stock prices, traffic, and daily temperatures. Just as an understanding of the underlying spatial process is the key to a strong spatial model, the key to a strong time series model is an understanding of the underlying temporal process.
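As a quick, made\-up illustration of serial correlation \- the series below is simulated, not real gas prices \- the relationship between a series and its one\-period lag can be checked directly:
```
# toy random-walk style series standing in for a daily gas price (made-up data)
set.seed(1)
gas_price <- 50 + cumsum(rnorm(100, sd = 0.1))

# correlation between today's value and yesterday's value (lag 1)
cor(gas_price[-1], gas_price[-length(gas_price)])
```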
Figure 8\.1, Source: [https://eng.uber.com/forecasting\-introduction/](https://eng.uber.com/forecasting-introduction/)
Uber describes its Marketplace Algorithm as one that, “enables us to predict user supply and demand in a spatio\-temporal fine granular fashion to direct driver\-partners to high demand areas before they arise, thereby increasing their trip count and earnings.”[67](#fn67) They go on to remark, “Spatio\-temporal forecasts are still an open research area.” Figure 8\.1 provides an example from the quoted Uber Engineering blog.
In a word, this is a dispatch problem, and there are two general approaches to consider. The more naive approach is to route drivers in response to space/time demand spikes as they emerge in real time. The problem with this approach is that by the time drivers reach a hot spot, the spike may have ended. Not only might this improperly allocate vehicles in the short run, but feedback effects may increase response times to other parts of the city in the long run.
The second approach is to generalize from recent ride share experiences to predict demand in the near future. Take rush hour for example \- demand occurs in the same locations at the same times, Monday through Friday. Thus, rush hour demand on Tuesday can be used to predict rush hour demand on Wednesday.
An actual ride share forecast would likely predict trip demand or `Trip_Count` for very high resolution space/time intervals, like for every 5 minutes for every 100x100 ft. fishnet grid cell. Our model will take a low resolution approach, reducing millions of Chicago ride share trips from November through December, 2018, into a 20% subsample and aggregating to hourly intervals and a subset of Chicago Census tracts.
We will learn new approaches for manipulating temporal data and creating time\-based features using the `lubridate` package. We also learn the `purrr` family of functions to loop through the validation of many different regressions. Data is wrangled in the next section. Exploratory Analysis then analyzes space/time patterns in the data. The final section trains and validates a space/time forecast.
8\.2 Data Wrangling \- ride share
---------------------------------
Begin by loading the required libraries and functions. The ride share data is then read in and wrangled along with weather data. Ride share trip data is then wrangled into a complete ‘panel’ of observations that include every possible space/time combination.
```
library(tidyverse)
library(sf)
library(lubridate)
library(tigris)
library(gganimate)
library(riem)
library(gridExtra)
library(knitr)
library(kableExtra)
options(tigris_class = "sf")
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette5 <- c("#eff3ff","#bdd7e7","#6baed6","#3182bd","#08519c")
palette4 <- c("#D2FBD4","#92BCAB","#527D82","#123F5A")
palette2 <- c("#6baed6","#08519c")
```
The ride share data for November and December 2018 is read in. These data exist on the Chicago Open Data portal, but because they are so large, querying with the API can take a very long time.[68](#fn68) Instead, the data below is a \~20% sample (n \= \~1\.793 million rows) of the original data.
```
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
ride <- read.csv(file.path(root.dir,"Chapter8/chicago_rideshare_trips_nov_dec_18_clean_sample.csv"))
```
To keep the data size manageable, only 3 pertinent fields are included in the data and defined in the table below.
| Variable\_Name | Description |
| --- | --- |
| Trip.Start.Timestamp | Date/time trip started |
| Pickup.Census.Tract | Census Tract origin |
| Dropoff.Census.Tract | Census Tract destination |
| |
| --- |
| Table 8\.1 |
### 8\.2\.1 Lubridate
Next, temporal data wrangling is performed using the fantastically simple `lubridate` package. One of the more powerful features of `lubridate` is its ability to standardize date/time stamps. In the code block below, a character vector contains my birthday written four different ways. Subjecting that vector to the `ymd` function miraculously standardizes three of the four items.
```
ymd(c("1982-09-06", "1982-Sep-6", "1982-Sept-06", "1982-Sept-six"))
```
```
## [1] "1982-09-06" "1982-09-06" "1982-09-06" NA
```
`ymd` is one of several components of the `parse_date_time` function. As below, these functions standardize the `Trip.Start.Timestamp` field (when a trip departed) into the 60 minute and 15 minute intervals needed for our analysis. Functions like `week`, `wday` and `hour` convert the date/time stamp into week of the year, day of the week, and hour of the day, respectively.
Two `Pickup.Census.Tract` units for Chicago’s O’Hare International Airport are dropped. Surely ride share companies forecast airport demand, but they likely employ additional features/models that account for takeoff and landing patterns.
```
ride2 <-
ride %>%
mutate(interval60 = floor_date(mdy_hms(Trip.Start.Timestamp), unit = "hour"),
interval15 = floor_date(mdy_hms(Trip.Start.Timestamp), unit = "15 mins"),
week = week(interval60),
dotw = wday(interval60, label=TRUE),
Pickup.Census.Tract = as.character(Pickup.Census.Tract),
Dropoff.Census.Tract = as.character(Dropoff.Census.Tract)) %>%
filter(Pickup.Census.Tract != "17031980000" & Pickup.Census.Tract != "17031770602")
ride2[1:3, c(1,4:7)]
```
```
## Trip.Start.Timestamp interval60 interval15 week dotw
## 1 12/07/2018 04:30:00 PM 2018-12-07 16:00:00 2018-12-07 16:30:00 49 Fri
## 2 12/30/2018 06:00:00 PM 2018-12-30 18:00:00 2018-12-30 18:00:00 52 Sun
## 3 11/24/2018 07:45:00 AM 2018-11-24 07:00:00 2018-11-24 07:45:00 47 Sat
```
### 8\.2\.2 Weather data
One might reasonably assume that inclement weather in the Windy City would incentivize ride share. There once were a host of open weather data APIs available to the rstats community, but that changed when IBM bought Weather Company and Weather Underground, two giant aggregators of weather data. Recently, the good people at the Iowa Environmental Mesonet released the `riem` package[69](#fn69), which provides free space/time weather data.
The `riem_measures` function downloads `weather.Data` for O’Hare Airport between November 1, 2018 and January 1, 2019\. Note that the O’Hare weather station sufficiently provides temporal weather for all of Chicago.
```
weather.Data <-
riem_measures(station = "ORD", date_start = "2018-11-01", date_end = "2019-01-01")
```
In this chapter, several ‘panel’ datasets are created. A panel is long form data, typically giving repeat observations for particular items. An example would be a dataset tracking student grades over time. Here, each row would represent a student/year pair. Every twelve rows would represent one student’s grades across twelve years of schooling.
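As a tiny, hypothetical sketch of that structure \- the students and years below are invented \- a complete panel is simply every combination of unit and time period:
```
# hypothetical complete panel: one row per student/year combination
student.panel <- expand.grid(student = c("A", "B", "C"),
                             year    = 1:12)

nrow(student.panel)   # 3 students * 12 years = 36 rows
head(student.panel)
```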
Below a `weather.Panel` is generated to summarize temperature, precipitation, and wind speed for every hour between November and December. In the code block, `mutate_if` and `replace` convert any character or numeric field with `NA` to 0\. The first `mutate` function creates `interval60` by converting the date/time stamp, `valid`, from 5 minute intervals to 60 minute intervals. Note what `?substr` does. Then `group_by` each hour (`interval60`) to `summarize` a final set of hourly weather indicators.
Below the weather data is plotted as a time series using `grid.arrange`.
```
weather.Panel <-
weather.Data %>%
mutate_if(is.character, list(~replace(as.character(.), is.na(.), "0"))) %>%
replace(is.na(.), 0) %>%
mutate(interval60 = ymd_h(substr(valid, 1, 13))) %>%
mutate(week = week(interval60),
dotw = wday(interval60, label=TRUE)) %>%
group_by(interval60) %>%
summarize(Temperature = max(tmpf),
Percipitation = sum(p01i),
Wind_Speed = max(sknt)) %>%
mutate(Temperature = ifelse(Temperature == 0, 42, Temperature))
```
```
grid.arrange(top = "Weather Data - Chicago - November & December, 2018",
ggplot(weather.Panel, aes(interval60,Percipitation)) + geom_line() +
labs(title="Percipitation", x="Hour", y="Percipitation") + plotTheme(),
ggplot(weather.Panel, aes(interval60,Wind_Speed)) + geom_line() +
labs(title="Wind Speed", x="Hour", y="Wind Speed") + plotTheme(),
ggplot(weather.Panel, aes(interval60,Temperature)) + geom_line() +
labs(title="Temperature", x="Hour", y="Temperature") + plotTheme())
```
### 8\.2\.3 Subset a study area using neighborhoods
A ride share forecast for every Cook County tract, for every hour, for 8 weeks, would yield a time/space panel (data frame) consisting of `nrow(chicagoTracts) * 24 * 7 * 8` \= 1,771,392 rows. A regression that size will melt your laptop. Instead, 201 Census tracts are subset across Chicago’s downtown, the Loop, up through Wrigleyville and Lincoln Square.
The code block below pulls all tract geometries from the `tigris` package, loads a neighborhood geojson and subsets those found in a `neighborhoodList`. `st_intersection` then finds `studyArea.tracts`. The plot below maps `studyArea.tracts` relative to `chicagoTracts`.
```
chicagoTracts <-
tigris::tracts(state = "Illinois", county = "Cook") %>%
dplyr::select(GEOID) %>% filter(GEOID != 17031990000)
neighborhoodList <-
c("Grant Park","Printers Row","Loop","Millenium Park","West Loop","United Center",
"West Town","East Village","Ukranian Village","Wicker Park","River North",
"Rush & Division","Streeterville","Gold Coast","Old Town","Bucktown","Lincoln Park",
"Sheffield & DePaul","Lake View","Boystown","Wrigleyville","North Center","Uptown",
"Lincoln Square","Little Italy, UIC")
nhoods <-
st_read("https://data.cityofchicago.org/api/geospatial/bbvz-uum9?method=export&format=GeoJSON") %>%
st_transform(st_crs(chicagoTracts)) %>%
filter(pri_neigh %in% neighborhoodList)
studyArea.tracts <-
st_intersection(chicagoTracts, st_union(nhoods))
```
### 8\.2\.4 Create the final space/time panel
The dataset for this analysis must be a complete panel with an observation for every possible space/time combination. The `ride2` data frame is incomplete as some space/time intervals saw no trips. In the code blocks below, the panel is completed by finding all unique space/time intervals and inserting `0` trips where necessary. Additional feature engineering is also performed.
The complete `study.panel` includes 8 weeks of ride share trips. How many total space/time combinations exist over 8 weeks? 24 hours a day \* 7 days a week \* 8 weeks \= 1,344 possible time units. That multiplied by 201 tracts in `studyArea.tracts` \= `270,144` unique space/time units. Thus, the final data frame must also have precisely this many rows.
The first step is `ride.template` which `filter`s for the 8 weeks of interest and uses `semi_join` to return only those trips in the `studyArea.tracts`. A quick calculation shows `ride.template` captures all the needed `unique` space/time units.
```
ride.template <-
filter(ride2, week %in% c(45:52)) %>%
semi_join(st_drop_geometry(studyArea.tracts),
by = c("Pickup.Census.Tract" = "GEOID"))
length(unique(ride.template$interval60)) * length(unique(ride.template$Pickup.Census.Tract))
```
```
## [1] 270144
```
An empty data frame, `study.panel`, is then created with the complete space/time panel. This is done using the `expand.grid` function and `unique`. `nrow` shows that the space/time count is still correct.
```
study.panel <-
expand.grid(interval60 = unique(ride.template$interval60),
Pickup.Census.Tract = unique(ride.template$Pickup.Census.Tract))
nrow(study.panel)
```
```
## [1] 270144
```
The final `ride.panel` is created by merging space/time intervals from actual trips in `ride.template` with intervals that saw no trips in `study.panel`.
A `Trip_Counter` is created in `ride.template` giving `1` for each trip. `right_join` then returns *non\-trip* space/time intervals from `study.panel`. If a trip occurred at a given interval, `Trip_Counter` returns `1`, otherwise `NA`. Trips are then grouped by each space/time interval and `Trip_Count` is set equal to the `sum` of `Trip_Counter`. `na.rm = T` prevents `sum` from returning `NA`.
Next, the `weather.Panel` and `studyArea.tracts` are joined to provide weather and geometry information, respectively. Finally, features denoting week and day of the week are created with `lubridate`.
The output is a complete panel. Note that `nrow(ride.template) == sum(ride.panel$Trip_Count)` \=\= `TRUE`.
```
ride.panel <-
ride.template %>%
mutate(Trip_Counter = 1) %>%
right_join(study.panel) %>%
group_by(interval60, Pickup.Census.Tract) %>%
summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
left_join(weather.Panel, by = "interval60") %>%
left_join(studyArea.tracts, by = c("Pickup.Census.Tract" = "GEOID")) %>%
mutate(week = week(interval60),
dotw = wday(interval60, label = TRUE)) %>%
st_sf()
```
To test for serial (temporal) correlation, additional feature engineering creates time lags. `arrange` sorts the data by space then time; `group_by` groups by tract, and `lag` returns the `Trip_Count` for the previous *nth* time period.
```
ride.panel <-
ride.panel %>%
arrange(Pickup.Census.Tract, interval60) %>%
group_by(Pickup.Census.Tract) %>%
mutate(lagHour = dplyr::lag(Trip_Count,1),
lag2Hours = dplyr::lag(Trip_Count,2),
lag3Hours = dplyr::lag(Trip_Count,3),
lag4Hours = dplyr::lag(Trip_Count,4),
lag12Hours = dplyr::lag(Trip_Count,12),
lag1day = dplyr::lag(Trip_Count,24)) %>%
ungroup()
as.data.frame(filter(
ride.panel, Pickup.Census.Tract == "17031831900"))[1:6, c(1:3,10:11)]
```
```
## interval60 Pickup.Census.Tract Trip_Count lagHour lag2Hours
## 1 2018-11-05 00:00:00 17031831900 2 NA NA
## 2 2018-11-05 01:00:00 17031831900 2 2 NA
## 3 2018-11-05 02:00:00 17031831900 0 2 2
## 4 2018-11-05 03:00:00 17031831900 0 0 2
## 5 2018-11-05 04:00:00 17031831900 0 0 0
## 6 2018-11-05 05:00:00 17031831900 3 0 0
```
### 8\.2\.5 Split training and test
How might generalizability be tested for in this use case? Random k\-fold cross\-validation or spatial cross\-validation (5\.5\.1\) both seem reasonable. LOGO\-CV could even be used to cross\-validate in time, across hours, days, or days of the week.
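A minimal sketch of that idea \- assuming the `ride.panel` object created above and a deliberately simple placeholder formula (the real specifications appear in 8\.4\.2\) \- might hold out each week in turn:
```
# hedged sketch: leave-one-week-out cross-validation in time (placeholder model)
weekly_MAE <-
  map_dfr(unique(ride.panel$week), function(holdout_week){
    fold.train <- filter(ride.panel, week != holdout_week)
    fold.test  <- filter(ride.panel, week == holdout_week)
    fit <- lm(Trip_Count ~ hour(interval60) + dotw + Temperature, data = fold.train)
    tibble(week = holdout_week,
           MAE  = mean(abs(fold.test$Trip_Count - predict(fit, fold.test)), na.rm = TRUE))
  })
```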
Here, a time series approach is taken, training on 5 weeks of data, weeks 45\-49, and testing on the following 3 weeks, 50\-52\. If the time/space experience is generalizable, it can be used to project into the near future. The below code splits the data by `week`.
```
ride.Train <- filter(ride.panel, week < 50)
ride.Test <- filter(ride.panel, week >= 50)
```
### 8\.2\.6 What about distance features?
Why not measure exposure or distance to points of interest, as we have in previous chapters? It seems reasonable, for instance, that ride share trips would decline as distance to subway stations declines, as riders trade off car trips for transit. Why not account for this? Check out the table below.
| time | tract | Distance\_to\_subway | Trip\_Count |
| --- | --- | --- | --- |
| 1 | 1 | 200 | 10 |
| 2 | 1 | 200 | 13 |
| 3 | 1 | 200 | 18 |
| 4 | 1 | 200 | 22 |
| 5 | 1 | 200 | 24 |
| 1 | 2 | 1890 | 45 |
| 2 | 2 | 1890 | 62 |
| 3 | 2 | 1890 | 89 |
| 4 | 2 | 1890 | 91 |
| 5 | 2 | 1890 | 100 |
| |
| --- |
| Table 8\.2 |
The first two columns of the table define a short panel: two tracts observed across five time intervals. Note the similarity between `Distance_to_subway` and `tract`, and note the coefficient on `Distance_to_subway` when it is included in a regression with `time` and `tract`.
The missing coefficient reflects the fact that `Distance_to_subway` is perfectly collinear with `tract`. The lesson is that these exposure variables are not needed for panel data, when `Pickup.Census.Tract` is controlled for directly. It may be hard to conceptualize, but controlling for tract\-level variation in part, controls for exposure to points of interest.
**Table 8\.3 Colinear Regression**
| | Trip\_Count |
| --- | --- |
| time | 8\.800\*\*\* (2\.248\) |
| tract | 60\.000\*\*\* (6\.359\) |
| Distance\_to\_subway | |
| Constant | \-69\.000\*\*\* (12\.107\) |
| N | 10 |
| R2 | 0\.937 |
| Adjusted R2 | 0\.919 |
| Residual Std. Error | 10\.054 (df \= 7\) |
| F Statistic | 52\.178\*\*\* (df \= 2; 7\) |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | |
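The code behind Table 8\.3 is not shown. A sketch that reconstructs the panel from Table 8\.2 (values copied from the table) and fits the same regression would look roughly like this; note how `lm` drops the collinear term. This also re\-creates the `colinear_df` object referenced again in 8\.4\.1\.
```
# rebuild the Table 8.2 panel: 2 tracts observed over 5 time periods
colinear_df <- data.frame(
  time  = rep(1:5, 2),
  tract = rep(1:2, each = 5),
  Distance_to_subway = rep(c(200, 1890), each = 5),
  Trip_Count = c(10, 13, 18, 22, 24, 45, 62, 89, 91, 100))

# Distance_to_subway is a linear function of tract, so its coefficient is NA (dropped)
summary(lm(Trip_Count ~ time + tract + Distance_to_subway, data = colinear_df))
```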
8\.3 Exploratory Analysis \- ride share
---------------------------------------
In this section, the ride share data is explored for time, space, weather, and demographic relationships. Section 8\.3\.1 and 8\.3\.2 illustrate temporal and spatial trends, respectively, while 8\.3\.3 creates a space/time animation. Finally, 8\.3\.4 explores correlations with weather.
### 8\.3\.1 `Trip_Count` serial autocorrelation
Should `Trip_Count` exhibit serial (temporal) correlation, then the time lag features will lead to better predictions. Several tests are conducted below, beginning with a time series visualization. `geom_vline` is used to visualize `mondays` as well as Thanksgiving and Christmas (dotted lines).
There are some interesting trends to note. Most weeks exhibit remarkably comparable time series patterns with consistent peaks and troughs. This suggests the presence of serial correlation. Weeks surrounding Thanksgiving and Christmas appear as clear outliers, however.
```
mondays <-
mutate(ride.panel,
monday = ifelse(dotw == "Mon" & hour(interval60) == 1,
interval60, 0)) %>%
filter(monday != 0)
tg <- as.POSIXct("2018-11-22 01:00:00 UTC")
xmas <- as.POSIXct("2018-12-24 01:00:00 UTC")
st_drop_geometry(rbind(
mutate(ride.Train, Legend = "Training"),
mutate(ride.Test, Legend = "Testing"))) %>%
group_by(Legend, interval60) %>%
summarize(Trip_Count = sum(Trip_Count)) %>%
ungroup() %>%
ggplot(aes(interval60, Trip_Count, colour = Legend)) + geom_line() +
scale_colour_manual(values = palette2) +
geom_vline(xintercept = tg, linetype = "dotted") +
geom_vline(xintercept = xmas, linetype = "dotted") +
geom_vline(data = mondays, aes(xintercept = monday)) +
labs(title="Rideshare trips by week: November-December",
subtitle="Dotted lines for Thanksgiving & Christmas",
x="Day", y="Trip Count") +
plotTheme() + theme(panel.grid.major = element_blank())
```
Next, the time `lag` features are tested for correlation with `Trip_Count`. `plotData.lag` returns the `Trip_Count` and time lag features for week 45\. `fct_relevel` reorders the lag levels. Omit that line and try `levels(plotData.lag$Variable)`.
Pearson correlation is then calculated for each `Variable` in `correlation.lag`.
```
plotData.lag <-
filter(as.data.frame(ride.panel), week == 45) %>%
dplyr::select(starts_with("lag"), Trip_Count) %>%
gather(Variable, Value, -Trip_Count) %>%
mutate(Variable = fct_relevel(Variable, "lagHour","lag2Hours","lag3Hours",
"lag4Hours","lag12Hours","lag1day"))
correlation.lag <-
group_by(plotData.lag, Variable) %>%
summarize(correlation = round(cor(Value, Trip_Count, use = "complete.obs"), 2))
```
The very strong `Trip_Count` correlations are visualized below. Note that the correlation decreases with each additional lag hour, but predictive power returns with the 1 day lag. These features should be strong predictors in a model. See 5\.4\.1 to overlay correlation coefficients in a `ggplot`.
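The plotting code is not shown; a hedged sketch using the `plotData.lag` and `correlation.lag` objects created above (and the `plotTheme` sourced at the top of the chapter) might look like:
```
# sketch: Trip_Count against each time lag, annotated with the Pearson correlations
ggplot(plotData.lag, aes(Value, Trip_Count)) +
  geom_point(size = 0.5, alpha = 0.3) +
  geom_smooth(method = "lm", se = FALSE, colour = "#08519c") +
  geom_text(data = correlation.lag,
            aes(x = -Inf, y = Inf, label = paste("r =", correlation)),
            hjust = -0.2, vjust = 1.5) +
  facet_wrap(~Variable, ncol = 3) +
  labs(title = "Trip_Count as a function of time lags, week 45",
       x = "Lagged Trip_Count", y = "Trip_Count") +
  plotTheme()
```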
### 8\.3\.2 `Trip_Count` spatial autocorrelation
Ride share exhibits strong temporal correlation, but how about spatial autocorrelation? Figures 8\.6 and 8\.7 map tract `Trip_Count` sums by week and by day of the week, respectively. Sum is chosen over mean here to avoid tract/time pairs with 0 counts. Even during the holidays, the spatial pattern appears consistent, with trips concentrated in Chicago’s central business district, The Loop (southeastern portion of the study area).
Note the `q5` function is used to map quintile breaks, but a list of `labels` is fed to `scale_fill_manual`. Set the data wrangling portion of the below code block to a `temp.df` and run `qBr(temp.df, "Sum_Trip_Count")` to get the breaks.
```
group_by(ride.panel, week, Pickup.Census.Tract) %>%
summarize(Sum_Trip_Count = sum(Trip_Count)) %>%
ungroup() %>%
ggplot() + geom_sf(aes(fill = q5(Sum_Trip_Count))) +
facet_wrap(~week, ncol = 8) +
scale_fill_manual(values = palette5,
labels = c("16", "140", "304", "530", "958"),
name = "Trip_Count") +
labs(title="Sum of rideshare trips by tract and week") +
mapTheme() + theme(legend.position = "bottom")
```
Finally, spatial autocorrelation is tested for by visualizing spatial lag correlations (4\.2\). A queen contiguity spatial weights matrix (5\.4\) relates tract *t* to adjacent tracts. “First order contiguity” refers to those tracts that touch tract *t*. “Second order” refers to the tracts that touch those tracts, etc.
This code is a bit verbose and thus withheld. These features will not be used in the model, but the resulting plot does show that there is strong spatial autocorrelation in ride share.
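For reference, a minimal sketch of that withheld routine \- using the `spdep` package, which is not loaded in the library block above \- might compute a queen\-contiguity spatial lag of tract\-level trip sums like this:
```
library(spdep)

# sum trips by tract, attach tract geometries, then compute the spatial lag
tract.trips <-
  st_drop_geometry(ride.panel) %>%
  group_by(Pickup.Census.Tract) %>%
  summarize(Sum_Trip_Count = sum(Trip_Count)) %>%
  left_join(studyArea.tracts, ., by = c("GEOID" = "Pickup.Census.Tract")) %>%
  st_sf()

# queen contiguity weights; zero.policy allows tracts with no neighbors
queen.listw <-
  nb2listw(poly2nb(tract.trips, queen = TRUE), style = "W", zero.policy = TRUE)

# each tract's spatial lag = the weighted average of its neighbors' trip sums
tract.trips$Spatial_Lag <-
  lag.listw(queen.listw, tract.trips$Sum_Trip_Count, zero.policy = TRUE)

cor(tract.trips$Sum_Trip_Count, tract.trips$Spatial_Lag)
```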
### 8\.3\.3 Space/time correlation?
Ride share in Chicago exhibits very strong space/time dependencies, so why not visualize both together? Below, the `gganimate` package is used to build an animated gif map of ride share trips over space and time. Here, the 15 minute intervals, `interval15`, are used from a single Monday in `week45.panel`. Tract geometries are pulled from `ride2`.
```
week45 <-
filter(ride2 , week == 45 & dotw == "Mon")
week45.panel <-
expand.grid(
interval15 = unique(week45$interval15),
Pickup.Census.Tract = unique(ride2$Pickup.Census.Tract))
```
A `ride.animation.data` panel is created following the same routine as 8\.2\.4 above. Recall, we are using a 20% sample of the data, so the actual number of trips is much higher. Nevertheless, the space/time pattern emanating outward from the Loop is apparent. Note again, the use of `fct_relevel`.
```
ride.animation.data <-
mutate(week45, Trip_Counter = 1) %>%
right_join(week45.panel) %>%
group_by(interval15, Pickup.Census.Tract) %>%
summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
ungroup() %>%
left_join(chicagoTracts, by=c("Pickup.Census.Tract" = "GEOID")) %>%
st_sf() %>%
mutate(Trips = case_when(Trip_Count == 0 ~ "0 trips",
Trip_Count > 0 & Trip_Count <= 3 ~ "1-3 trips",
Trip_Count > 3 & Trip_Count <= 6 ~ "4-6 trips",
Trip_Count > 6 & Trip_Count <= 10 ~ "7-10 trips",
Trip_Count > 10 ~ "11+ trips")) %>%
mutate(Trips = fct_relevel(Trips, "0 trips","1-3 trips","4-6 trips",
"7-10 trips","10+ trips"))
```
The animation object is created below. Install and load the `gifski` package. `transition_manual` is set to `interval15` to suggest a new map be generated for each 15 minute interval. `animate` creates the gif, setting `duration` to ensure that the entire animation lasts only 20 seconds.
```
rideshare_animation <-
ggplot() +
geom_sf(data = ride.animation.data, aes(fill = Trips)) +
scale_fill_manual(values = palette5) +
labs(title = "Rideshare pickups for one day in November 2018",
subtitle = "15 minute intervals: {current_frame}") +
transition_manual(interval15) +
mapTheme()
```
```
animate(rideshare_animation, duration=20, renderer = gifski_renderer())
```
The below code can be used to write the animated gif to your local machine.
```
anim_save("rideshare_local", rideshare_animation, duration=20, renderer = gifski_renderer())
```
### 8\.3\.4 Weather
`ride.panel` includes three weather\-related variables. The below bar plot removes the spatial variation and creates a dummy variable, `isPercip`, to ask whether precipitation affects mean `Trip_Count`. There appears to be little effect.
```
st_drop_geometry(ride.panel) %>%
group_by(interval60) %>%
summarize(Trip_Count = mean(Trip_Count),
Percipitation = first(Percipitation)) %>%
mutate(isPercip = ifelse(Percipitation > 0,"Rain/Snow", "None")) %>%
group_by(isPercip) %>%
summarize(Mean_Trip_Count = mean(Trip_Count)) %>%
ggplot(aes(isPercip, Mean_Trip_Count)) + geom_bar(stat = "identity") +
labs(title="Does ridership vary with percipitation?",
x="Percipitation", y="Mean Trip Count") +
plotTheme()
```
Do more people take ride share when it is colder? Below, `Trip_Count` is plotted as a function of `Temperature` by `week`. These plots suggest the opposite \- that in most weeks, ride share increases as temps warm. If that sounds strange for November and December, it is because this correlation is ‘spurious’.
A key relationship has been omitted \- namely that both temperature and ride share increase with the hour of the day. Temperature may still be an important predictor, but only when time is controlled for. A couple of quick regressions can illustrate this point. Regressing `Trip_Count` as a function of `Temperature` estimates that a one degree increase in temps leads to a 5\.54 increase in trips.
When hourly fixed effects are added to the regression, the model suggests a one degree increase in temps leads to a \-7\.47 *decrease* in `Trip_Count`. This is what we would expect \- that all else equal, as temperature increases, travelers are less likely to take ride share versus, say walking.
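Those two regressions are not shown in code, and the exact specification behind the 5\.54 and \-7\.47 estimates is not given, so the sketch below is only one plausible version at the tract/hour level \- the signs, not the magnitudes, are the point:
```
# naive regression: Temperature alone (likely a spurious positive relationship)
summary(lm(Trip_Count ~ Temperature, data = st_drop_geometry(ride.panel)))

# adding hour-of-day fixed effects controls for time of day
summary(lm(Trip_Count ~ Temperature + as.factor(hour(interval60)),
           data = st_drop_geometry(ride.panel)))
```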
```
st_drop_geometry(ride.panel) %>%
group_by(interval60) %>%
summarize(Trip_Count = mean(Trip_Count),
Temperature = first(Temperature)) %>%
mutate(week = week(interval60)) %>%
ggplot(aes(Temperature, Trip_Count)) +
geom_point() + geom_smooth(method = "lm", se= FALSE) +
facet_wrap(~week, ncol=8) +
labs(title="Trip Count as a fuction of Temperature by week",
x="Temperature", y="Mean Trip Count") +
plotTheme()
```
8\.4 Modeling and validation using `purrr::map`
-----------------------------------------------
In this section, models are estimated from `ride.Train` and tested on `ride.Test` to gauge how well the space/time features forecast ride share demand. The `purrr` family of functions will be used to loop through a set of ‘nested’ data frames to efficiently compare across different model specifications. This functionality is very powerful, and programmatically, is a step up from the tidyverse wrangling used thus far.
### 8\.4\.1 A short primer on `nest`ed `tibble`s
Let’s return to the example space/time panel created above, `colinear_df`, by re\-engineering this data frame as a nested `tibble`. A `tibble` is like a data frame with more bells and whistles. In the code block below, `colinear_df` is converted with `as.tibble`. `nest` then embeds 5 separate tibbles in `colinear_nested`, delineated by `time`. This gives a tibble of tibbles.
```
colinear_nested <- nest(as.tibble(colinear_df), -time)
colinear_nested
```
```
## # A tibble: 5 x 2
## time data
## <dbl> <list>
## 1 1 <tibble [2 x 3]>
## 2 2 <tibble [2 x 3]>
## 3 3 <tibble [2 x 3]>
## 4 4 <tibble [2 x 3]>
## 5 5 <tibble [2 x 3]>
```
Nesting allows one to split and wrangle data in all sorts of interesting ways. Any nested `tibble` can be `unnest`ed with:
```
unnest(colinear_nested[1,2])
```
```
## # A tibble: 2 x 3
## tract Distance_to_subway Trip_Count
## <dbl> <dbl> <dbl>
## 1 1 200 10
## 2 2 1890 45
```
### 8\.4\.2 Estimate a ride share forecast
In this section, four different linear regressions are estimated on `ride.Train`, each with different fixed effects:
1. `reg1` focuses on just time, including hour fixed effects, day of the week, and `Temperature`.
2. `reg2` focuses on just space effects with the `Pickup.Census.Tract` fixed effects.
3. `reg3` includes both time and space fixed effects.
4. `reg4` adds the time `lag` features.
Time features like `hour` could be modeled as either a continuous or categorical feature. As a continuous feature, the interpretation is that a 1 `hour` increase is associated with an estimated change in `Trip_Count`. As a factor, the interpretation is that there are significant differences in `Trip_Count` by hour. Both options can be explored; in the regressions below, `hour` enters as a continuous feature.
Spatial fixed effects for `Pickup.Census.Tract` are also included to account for the across\-tract differences, like amenities, access to transit, distance to the Loop, etc.
Ordinary Least Squares (OLS) is chosen, despite `Trip_Count` being a count variable. Poisson is an option, but the counts are sufficiently large to feel at ease with OLS. Not surprisingly, the best choice of algorithm is that which leads to the most accurate and generalizable model.
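For reference, hedged sketches of the two alternatives mentioned above \- a Poisson count model and a factor\-coded `hour` \- are shown below; neither is used in the comparisons that follow.
```
# sketch: Poisson alternative to the space/time specification
reg3.poisson <- glm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature,
                    family = "poisson", data = ride.Train)

# sketch: hour coded as a categorical (factor) feature rather than continuous
reg1.factorHour <- lm(Trip_Count ~ as.factor(hour(interval60)) + dotw + Temperature,
                      data = ride.Train)
```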
```
reg1 <- lm(Trip_Count ~ hour(interval60) + dotw + Temperature, data=ride.Train)
reg2 <- lm(Trip_Count ~ Pickup.Census.Tract + dotw + Temperature, data=ride.Train)
reg3 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature,
data=ride.Train)
reg4 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature +
lagHour + lag2Hours + lag3Hours + lag12Hours + lag1day,
data=ride.Train)
```
### 8\.4\.3 Validate test set by time
In this section, Mean Absolute Error (MAE) is calculated on `ride.Test` for each model. `ride.Test` includes 3 weeks and is highly influenced by the Christmas holiday. To understand if models generalize to the holiday and non\-holiday weeks, `ride.Test.weekNest` nests `ride.Test` by `week`. Note that the `data` field contains 3 sf tibbles (with geometries), and `unnest(ride.Test.weekNest[1,2])` returns one week’s worth of simple features data.
```
ride.Test.weekNest <-
as.data.frame(ride.Test) %>%
nest(-week)
ride.Test.weekNest
```
```
## # A tibble: 3 x 2
## week data
## <dbl> <list>
## 1 50 <tibble [33,768 x 14]>
## 2 51 <tibble [33,768 x 14]>
## 3 52 <tibble [33,768 x 14]>
```
Next, a small function is created that takes a tibble, `dat` and a regression model, `fit` as its inputs, and outputs predictions as `pred`. This function is used to predict for each week in `ride.Test.weekNest`.
```
model_pred <- function(dat, fit){
pred <- predict(fit, newdata = dat)}
```
The nested format allows one to loop through each model for each week and `mutate` summary statistics. In the code block below, `week_predictions` are calculated for each week in `ride.Test`. The `map` function applies the `model_pred` function to each nested tibble.
Take the first line of the below `mutate`, for example. A new column, `Time_FE`, includes predictions for `reg1` \- the time fixed effects model. The predictions are created by `map`ping the function, `model_pred`, to each row of `data`, parameterizing `fit` as the `reg1` model.
```
week_predictions <-
ride.Test.weekNest %>%
mutate(A_Time_FE = map(.x = data, fit = reg1, .f = model_pred),
B_Space_FE = map(.x = data, fit = reg2, .f = model_pred),
C_Space_Time_FE = map(.x = data, fit = reg3, .f = model_pred),
D_Space_Time_Lags = map(.x = data, fit = reg4, .f = model_pred))
week_predictions
```
```
## # A tibble: 3 x 6
## week data A_Time_FE B_Space_FE C_Space_Time_FE D_Space_Time_La~
## <dbl> <list> <list> <list> <list> <list>
## 1 50 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 2 51 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 3 52 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
```
The output shows that each new column is a `list` of predictions for each model by week. Once columns are moved to long form with `gather`, four new columns are generated in the code block below.
`Observed` is the actual space/time `Trip_Count` for that week, created by looping through (`map`) each nested tibble in the `data` field and `pull`ing `Trip_Count`. `Absolute_Error` is created with `map2`, which maps over two inputs. In this case, `Observed` and `Prediction` are fed into a function (`~`) that calculates the absolute value of their difference.
To calculate `MAE`, `map_dbl`, a variant of `map`, is used to loop through `Absolute_Error`, calculating the `mean`. The same function calculates the standard deviation of absolute error, `sd_AE`, which is a useful measure of generalizability.
```
week_predictions <- week_predictions %>%
gather(Regression, Prediction, -data, -week) %>%
mutate(Observed = map(data, pull, Trip_Count),
Absolute_Error = map2(Observed, Prediction, ~ abs(.x - .y)),
MAE = map_dbl(Absolute_Error, mean),
sd_AE = map_dbl(Absolute_Error, sd))
```
The resulting data frame shows goodness of fit by week and model. The MAE for the time effects model (`reg1`) in week 50 is comparable to the mean observed `Trip_Count` of 5\.86\. However, with increasing sophistication, the model becomes more accurate and more generalizable.
This nested framework makes it easy to plot MAE by model by week, as below. Both the spatial fixed effects and time lags add significant predictive power.
```
week_predictions %>%
dplyr::select(week, Regression, MAE) %>%
gather(Variable, MAE, -Regression, -week) %>%
ggplot(aes(week, MAE)) +
geom_bar(aes(fill = Regression), position = "dodge", stat="identity") +
scale_fill_manual(values = palette5) +
labs(title = "Mean Absolute Errors by model specification and week") +
plotTheme()
```
For each model, predicted and observed `Trip_Count` is taken out of the spatial context and their means plotted in time series form below. Models `A` and `C` appear to have the same time trend because again, the spatial context has been removed.
With more sophistication comes the ability to predict for the highest peaks, and the time lags help make this happen. The time series does show that the model over\-predicts trips for some of the days around Christmas. This may be because the training data suggests more trips should otherwise occur during those times.
```
week_predictions %>%
mutate(interval60 = map(data, pull, interval60),
Pickup.Census.Tract = map(data, pull, Pickup.Census.Tract)) %>%
dplyr::select(interval60, Pickup.Census.Tract, Observed, Prediction, Regression) %>%
unnest() %>%
gather(Variable, Value, -Regression, -interval60, -Pickup.Census.Tract) %>%
group_by(Regression, Variable, interval60) %>%
summarize(Value = mean(Value)) %>%
ggplot(aes(interval60, Value, colour=Variable)) + geom_line(size = 1.1) +
facet_wrap(~Regression, ncol=1) +
scale_colour_manual(values = palette2) +
labs(title = "Mean Predicted/Observed ride share by hourly interval",
x = "Hour", y= "Rideshare Trips") +
plotTheme()
```
### 8\.4\.4 Validate test set by space
```
error.byWeek <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>% st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, week, geometry) %>%
gather(Variable, Value, -Pickup.Census.Tract, -week, -geometry) %>%
group_by(Variable, Pickup.Census.Tract, week) %>%
summarize(MAE = mean(Value))
```
Above, MAE for `reg4` is mapped, by `unnest`ing it from `week_predictions`. Errors for all time periods are averaged by `Pickup.Census.Tract` and `week`. The highest errors are in the Loop where more trips occur and there are some interesting error patterns along arterial routes running west and north from downtown.
MAE is then mapped by hour for Monday of the 50th week. Errors cluster in The Loop but are otherwise distributed throughout. Each map below could be tested for spatial autocorrelation which may suggest the addition of new features. Given the use case however, is there an incentive to have a really generalizable model?
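As a hedged sketch of such a test \- again assuming the `spdep` package and the `error.byWeek` object created above \- a global Moran’s I on week 50 tract\-level MAE might look like:
```
# join week 50 tract errors back to the tract geometries
week50.errors <-
  as.data.frame(error.byWeek) %>%
  filter(week == 50) %>%
  dplyr::select(Pickup.Census.Tract, MAE)

week50.sf <- left_join(studyArea.tracts, week50.errors,
                       by = c("GEOID" = "Pickup.Census.Tract"))

# global Moran's I: is MAE spatially clustered across tracts?
moran.test(week50.sf$MAE,
           nb2listw(poly2nb(week50.sf, queen = TRUE), style = "W", zero.policy = TRUE),
           zero.policy = TRUE)
```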
We have been wary of geospatial machine learning models with errors that vary systematically across space. In other use cases, the concern was that these differences could drive disparate impact. Do ride share companies have an incentive to be fair in the communities they serve? Could errors in this model be driven by selection bias in the training data?
It is possible that ride share drivers select into neighborhoods based on passenger perceptions, but what incentive do ride share companies have to ensure that everyone who wants a ride gets one?
```
error.byHour <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>%
st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, geometry, interval60) %>%
gather(Variable, Value, -interval60, -Pickup.Census.Tract, -geometry) %>%
filter(wday(interval60, label = TRUE) == "Mon" & week(interval60) == 50) %>%
group_by(hour = hour(interval60), Pickup.Census.Tract) %>%
summarize(MAE = mean(Value))
```
8\.5 Conclusion \- Dispatch
---------------------------
*Why does our model predict so well?*
It is rare that linear models have such small errors, but in this case, the time lag features are very strong predictors. These very high resolution time lags can reasonably be included because the dispatch use case allows for near real time predictions \- like the next hour. Many temporal forecasts have much longer time horizons.
Unlike previous chapters, no cross\-validation was performed, and thus, it is possible that our model is not actually as useful as the results suggest. Random k\-fold or spatial cross\-validation are options. One might also cross\-validate by hour, day of the week or some other time unit to understand how well our model generalizes across time. Space/time cross\-validation is also possible.
*A dispatch algorithm*
Uber’s Marketplace Algorithm is an example of how Uber uses space/time forecasting to reduce response times, save money and maybe even lower automobile congestion. How can Uber be sure that a given algorithm is optimized for these bottom lines?
Ride share companies take many thousands of trips every day. To find the optimal approach, imagine randomly allocating two or more dispatch algorithms across 1000 Chicago drivers over the course of a week. Such a Randomized Control Trial, or what tech companies sometimes call ‘A/B testing’, could help reveal the most optimal algorithm.
*What about the public sector use of this algorithm?*
What space/time prediction problems might public sector data scientists wrestle with? Any public or private entity that needs to ‘rebalance’ an asset like bike share bikes or personal scooters, would benefit from understanding near future demand. Public parking authorities could maximize parking ticket revenues by allocating patrols to time/space units predicted to have illegal parkers. Emergency Management offices can use forecasts like these to strategically place ambulances to reduce response times. Finally, this is an obvious extension to the predictive policing use case discussed in Chapter 5, which would allocate police cruisers in space and time.
8\.6 Assignment \- Predict bike share trips
-------------------------------------------
One of the most difficult operational problems for urban bike share systems is the need to ‘re\-balance’ bicycles across the network. Bike share is not useful if a dock has no bikes to pickup, nor if there are no open docking spaces to deposit a bike. Re\-balancing is the practice of anticipating (or predicting) bike share demand for all docks at all times and manually redistributing bikes to ensure a bike or a docking place is available when needed.
In this assignment, you will pick a city with a bike share open data feed and forecast space/time demand for bike share pickups. Most bike share data has fields for origin, destination and date/time.
Envision a bike re\-balancing plan and design an algorithm to inform such a plan. The deliverables include:
1. 2\-3 paragraphs that introduce the reader to bike share and the need for re\-balancing. How will re\-balancing occur? Perhaps you will manage a small fleet of trucks to move bikes from here to there or perhaps you will offer rewards, discounts or other incentives for riders to move a bike from place to place. *Keep in mind*, your plan will inform the appropriate time lag features you can use. How far forward do you wish to predict for at any given time?
2. Your unit of analysis here is the bike share station, not Census tracts. Engineer features to account for weather and time effects and experiment with some amenity features. Develop two different training/test sets including 1\) a 3 week training set and a 2 week test set of all the stations and 2\) a complete 5 week panel for cross\-validation.
3. Develop exploratory analysis plots that describe the space/time dependencies in the data and create an animated map. Interpret your findings in the context of the re\-balancing plan.
4. Use `purrr` to train and validate several models for comparison on the latter two week test set. Perform either random k\-fold cross validation or LOGO\-CV on the 5 week panel. You may choose to cross validate by time or space. Interpret your findings in the context of accuracy and generalizability.
5. Conclude with how useful your algorithm is for the bike re\-balancing plan.
8\.1 Introduction \- ride share
-------------------------------
This last chapter returns to spatial problem solving to predict space/time demand for ride share in Chicago. Companies like Uber \& Lyft generate and analyze tremendous amounts of data to incentivize ride share use; to employ dynamic or ‘surge’ pricing; to solve routing problems; and to forecast ride share demand to minimize driver response times. This last use case is the focus of this chapter.
The model developed here is similar to the other geospatial machine learning models built thus far, with two exceptions. First, this chapter focuses on time effects, adding additional complexity to our models, and two, social costs are less important here than they have been in previous chapters.
We have dealt with time once previously. Recall the Predictive Policing algorithm was trained on 2017 burglaries and validated on 2018 (5\.5\.4\). Here, time was not an explicit parameter in the model. Instead, it was assumed that the 2017 burglary experience generalized to 2018\.
To forecast ride share, time must be explicitly accounted for. Conceptually, modeling time is not all that different from modeling space. Spatial autocorrelation posits that values *here* are in part, a function of nearby values. In the case of temporal or ‘serial correlation’, a similar hypothesis can be posited \- that the value *now* is in part, a function of values in the past.
There are many examples of serial correlation. Gas prices today are related to gas prices yesterday. Same with stock prices, traffic, and daily temperatures. Just as an understanding of the underlying spatial process is the key to a strong spatial model, the key to a strong time series model is an understanding of the underlying temporal process.
Figure 8\.1, Source: [https://eng.uber.com/forecasting\-introduction/](https://eng.uber.com/forecasting-introduction/)
Uber describes its Marketplace Algorithm as one that, “enables us to predict user supply and demand in a spatio\-temporal fine granular fashion to direct driver\-partners to high demand areas before they arise, thereby increasing their trip count and earnings.”[67](#fn67) They go on to remark, “Spatio\-temporal forecasts are still an open research area.” Figure 8\.1 provides an example from the quoted Uber Engineering blog.
In a word, this is a dispatch problem, and there are two general approaches to consider. The more naive is to route drivers in response to space/time demand spikes as they emerge in real time. The problem with this approach is that by the time drivers reach a hot spot, the spike may have ended. Not only might this improperly allocate vehicles in the short run, but feedback effects may increase response times to other parts of the city in the long run.
The second approach is to generalize from recent ride share experiences to predict demand in the near future. Take rush hour for example \- demand occurs in the same locations at the same times, Monday through Friday. Thus, rush hour demand on Tuesday can be used to predict rush hour demand on Wednesday.
An actual ride share forecast would likely predict trip demand or `Trip_Count` for very high resolution space/time intervals, like for every 5 minutes for every 100x100 ft. fishnet grid cell. Our model will take a low resolution approach, reducing millions of Chicago ride share trips from November through December, 2018, into a 20% subsample and aggregating to hourly intervals and a subset of Chicago Census tracts.
We will learn new approaches for manipulating temporal data and creating time\-based features using the `lubridate` package. We also learn the `purrr` family of functions to loop through the validation of many different regressions. Data is wrangled in the next section. Exploratory Analysis then analyzes space/time patterns in the data. The final section trains and validates a space/time forecast.
8\.2 Data Wrangling \- ride share
---------------------------------
Begin by loading the required libraries and functions. The ride share data is then read in and wrangled along with weather data. Ride share trip data is then wrangled into a complete ‘panel’ of observations that include every possible space/time combination.
```
library(tidyverse)
library(sf)
library(lubridate)
library(tigris)
library(gganimate)
library(riem)
library(gridExtra)
library(knitr)
library(kableExtra)
options(tigris_class = "sf")
source("https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/functions.r")
palette5 <- c("#eff3ff","#bdd7e7","#6baed6","#3182bd","#08519c")
palette4 <- c("#D2FBD4","#92BCAB","#527D82","#123F5A")
palette2 <- c("#6baed6","#08519c")
```
The ride share data for November and December 2018 is read in. These data exist on the Chicago Open Data portal, but because they are so large, querying with the API can take a very long time.[68](#fn68) Instead, the data below is a \~20% sample (n \= \~1\.793 million rows) of the original data.
```
root.dir = "https://raw.githubusercontent.com/urbanSpatial/Public-Policy-Analytics-Landing/master/DATA/"
ride <- read.csv(file.path(root.dir,"Chapter8/chicago_rideshare_trips_nov_dec_18_clean_sample.csv"))
```
To keep the data size manageable, only 3 pertinent fields are included in the data and defined in the table below.
| Variable\_Name | Description |
| --- | --- |
| Trip.Start.Timestamp | Date/time trip started |
| Pickup.Census.Tract | Census Tract origin |
| Dropoff.Census.Tract | Census Tract destination |
| |
| --- |
| Table 8\.1 |
### 8\.2\.1 Lubridate
Next, temporal data wrangling is performed using the fantastically simple `lubridate` package. One of the more powerful features of `lubridate` is its ability to standardize date/time stamps. In the code block below, a list contains my birthday written four different ways. Subjecting that list to the `ymd` function miraculously standardizes three of the four items.
```
ymd(c("1982-09-06", "1982-Sep-6", "1982-Sept-06", "1982-Sept-six"))
```
```
## [1] "1982-09-06" "1982-09-06" "1982-09-06" NA
```
`ymd` is one of several wrappers around the `parse_date_time` function. Below, these functions standardize the `Trip.Start.Timestamp` field (when a trip departed) into the 60 minute and 15 minute intervals needed for our analysis. Functions like `week`, `wday`, and `hour` convert the date/time stamp into week of the year, day of the week, and hour of the day, respectively.
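As a quick sketch, these helpers applied to a single, hypothetical timestamp behave as follows:

```
# A hypothetical timestamp, parsed and decomposed with lubridate helpers
ts <- mdy_hms("11/24/2018 07:45:00 AM")
week(ts)                 # week of the year, e.g. 47
wday(ts, label = TRUE)   # day of the week, e.g. Sat
hour(ts)                 # hour of the day, e.g. 7
floor_date(ts, "hour")   # rounded down to the 60 minute interval
```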
Two `Pickup.Census.Tract` units for Chicago’s O’Hare International Airport are dropped. Surely ride share companies forecast airport demand, but they likely employ additional features/models that account for takeoff and landing patterns.
```
ride2 <-
ride %>%
mutate(interval60 = floor_date(mdy_hms(Trip.Start.Timestamp), unit = "hour"),
interval15 = floor_date(mdy_hms(Trip.Start.Timestamp), unit = "15 mins"),
week = week(interval60),
dotw = wday(interval60, label=TRUE),
Pickup.Census.Tract = as.character(Pickup.Census.Tract),
Dropoff.Census.Tract = as.character(Dropoff.Census.Tract)) %>%
filter(Pickup.Census.Tract != "17031980000" & Pickup.Census.Tract != "17031770602")
ride2[1:3, c(1,4:7)]
```
```
## Trip.Start.Timestamp interval60 interval15 week dotw
## 1 12/07/2018 04:30:00 PM 2018-12-07 16:00:00 2018-12-07 16:30:00 49 Fri
## 2 12/30/2018 06:00:00 PM 2018-12-30 18:00:00 2018-12-30 18:00:00 52 Sun
## 3 11/24/2018 07:45:00 AM 2018-11-24 07:00:00 2018-11-24 07:45:00 47 Sat
```
### 8\.2\.2 Weather data
One might reasonably assume that inclement weather in the Windy City would incentivize ride share. There once were a host of open weather data APIs available to the rstats community, but that changed when IBM bought The Weather Company and Weather Underground, two giant aggregators of weather data. Recently, the good people at the Iowa Environmental Mesonet released the `riem` package[69](#fn69), which provides free space/time weather data.
The `riem_measures` function downloads `weather.Data` for O’Hare Airport between November 1, 2018 and January 1, 2019\. Note that the single O’Hare weather station provides sufficient temporal weather data for all of Chicago.
```
weather.Data <-
riem_measures(station = "ORD", date_start = "2018-11-01", date_end = "2019-01-01")
```
In this chapter, several ‘panel’ datasets are created. A panel is long form data, typically giving repeat observations for particular items. An example would be a dataset tracking student grades over time. Here, each row would represent a student/year pair. Every twelve rows would represent one student’s grades across twelve years of schooling.
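A minimal sketch of this idea, using two hypothetical students and three years, shows how every item/time combination gets its own row:

```
# Every student/year pair gets its own row - the essence of a complete panel
toy.panel <- expand.grid(student = c("A", "B"), year = 2016:2018)
toy.panel   # 6 rows: 2 students x 3 years
```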
Below, a `weather.Panel` is generated to summarize temperature, precipitation, and wind speed for every hour between November and December. In the code block, `mutate_if` and `replace` convert any character or numeric field with `NA` to 0\. The first `mutate` function creates `interval60` by converting the date/time stamp, `valid`, from 5 minute intervals to 60 minute intervals. Note what `?substr` does. Then `group_by` each hour (`interval60`) to `summarize` a final set of hourly weather indicators.
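As a sketch, here is what the `substr`/`ymd_h` step does to one hypothetical 5 minute stamp:

```
# Keep the first 13 characters ("YYYY-MM-DD HH"), then parse back to a date/time
valid_example <- "2018-11-01 14:35"
substr(valid_example, 1, 13)          # "2018-11-01 14"
ymd_h(substr(valid_example, 1, 13))   # "2018-11-01 14:00:00 UTC"
```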
Below the weather data is plotted as a time series using `grid.arrange`.
```
weather.Panel <-
weather.Data %>%
mutate_if(is.character, list(~replace(as.character(.), is.na(.), "0"))) %>%
replace(is.na(.), 0) %>%
mutate(interval60 = ymd_h(substr(valid, 1, 13))) %>%
mutate(week = week(interval60),
dotw = wday(interval60, label=TRUE)) %>%
group_by(interval60) %>%
summarize(Temperature = max(tmpf),
Percipitation = sum(p01i),
Wind_Speed = max(sknt)) %>%
mutate(Temperature = ifelse(Temperature == 0, 42, Temperature))
```
```
grid.arrange(top = "Weather Data - Chicago - November & December, 2018",
ggplot(weather.Panel, aes(interval60,Percipitation)) + geom_line() +
labs(title="Percipitation", x="Hour", y="Percipitation") + plotTheme(),
ggplot(weather.Panel, aes(interval60,Wind_Speed)) + geom_line() +
labs(title="Wind Speed", x="Hour", y="Wind Speed") + plotTheme(),
ggplot(weather.Panel, aes(interval60,Temperature)) + geom_line() +
labs(title="Temperature", x="Hour", y="Temperature") + plotTheme())
```
### 8\.2\.3 Subset a study area using neighborhoods
A ride share forecast for every Cook County tract, for every hour, for 8 weeks, would yield a time/space panel (data frame) consisting of `nrow(chicagoTracts) * 24 * 7 * 8` \= 1,771,392 rows. A regression that size will melt your laptop. Instead, 201 Census tracts are subset across Chicago’s downtown, the Loop, up through Wrigleyville and Lincoln Square.
The code block below pulls all tract geometries from the `tigris` package, loads a neighborhood geojson and subsets those found in a `neighborhoodList`. `st_intersection` then finds `studyArea.tracts`. The plot below maps `studyArea.tracts` relative to `chicagoTracts`.
```
chicagoTracts <-
tigris::tracts(state = "Illinois", county = "Cook") %>%
dplyr::select(GEOID) %>% filter(GEOID != 17031990000)
neighborhoodList <-
c("Grant Park","Printers Row","Loop","Millenium Park","West Loop","United Center",
"West Town","East Village","Ukranian Village","Wicker Park","River North",
"Rush & Division","Streeterville","Gold Coast","Old Town","Bucktown","Lincoln Park",
"Sheffield & DePaul","Lake View","Boystown","Wrigleyville","North Center","Uptown",
"Lincoln Square","Little Italy, UIC")
nhoods <-
st_read("https://data.cityofchicago.org/api/geospatial/bbvz-uum9?method=export&format=GeoJSON") %>%
st_transform(st_crs(chicagoTracts)) %>%
filter(pri_neigh %in% neighborhoodList)
studyArea.tracts <-
st_intersection(chicagoTracts, st_union(nhoods))
```
### 8\.2\.4 Create the final space/time panel
The dataset for this analysis must be a complete panel with an observation for every possible space/time combination. The `ride2` data frame is incomplete as some space/time intervals saw no trips. In the code blocks below, the panel is completed by finding all unique space/time intervals and inserting `0` trips where necessary. Additional feature engineering is also performed.
The complete `study.panel` includes 8 weeks of ride share trips. How many total space/time combinations exist over 8 weeks? 24 hours a day \* 7 days a week \* 8 weeks \= 1,344 possible time units. That multiplied by 201 tracts in `studyArea.tracts` \= `270,144` unique space/time units. Thus, the final data frame must also have precisely this many rows.
The first step is `ride.template` which `filter`s for the 8 weeks of interest and uses `semi_join` to return only those trips in the `studyArea.tracts`. A quick calculation shows `ride.template` captures all the needed `unique` space/time units.
```
ride.template <-
filter(ride2, week %in% c(45:52)) %>%
semi_join(st_drop_geometry(studyArea.tracts),
by = c("Pickup.Census.Tract" = "GEOID"))
length(unique(ride.template$interval60)) * length(unique(ride.template$Pickup.Census.Tract))
```
```
## [1] 270144
```
An empty data frame, `study.panel`, is then created with the complete space/time panel. This is done using the `expand.grid` function and `unique`. `nrow` shows that the space/time count is still correct.
```
study.panel <-
expand.grid(interval60 = unique(ride.template$interval60),
Pickup.Census.Tract = unique(ride.template$Pickup.Census.Tract))
nrow(study.panel)
```
```
## [1] 270144
```
The final `ride.panel` is created by merging space/time intervals from actual trips in `ride.template` with intervals that saw no trips in `study.panel`.
A `Trip_Counter` is created in `ride.template` giving `1` for each trip. `right_join` then returns *non\-trip* space/time intervals from `study.panel`. If a trip occurred at a given interval, `Trip_Counter` returns `1`, otherwise `NA`. Trips are then grouped by each space/time interval and `Trip_Count` is set equal to the `sum` of `Trip_Counter`. `na.rm = T` prevents `sum` from returning `NA`.
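A one\-line sketch shows why `na.rm = T` matters when summing `Trip_Counter`:

```
# Without na.rm, a single NA (a non-trip interval) poisons the sum
sum(c(1, 1, NA))                 # NA
sum(c(1, 1, NA), na.rm = TRUE)   # 2
```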
Next, the `weather.Panel` and `studyArea.tracts` are joined to provide weather and geometry information, respectively. Finally, features denoting week and day of the week are created with `lubridate`.
The output is a complete panel. Note that `nrow(ride.template) == sum(ride.panel$Trip_Count)` \=\= `TRUE`.
```
ride.panel <-
ride.template %>%
mutate(Trip_Counter = 1) %>%
right_join(study.panel) %>%
group_by(interval60, Pickup.Census.Tract) %>%
summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
left_join(weather.Panel, by = "interval60") %>%
left_join(studyArea.tracts, by = c("Pickup.Census.Tract" = "GEOID")) %>%
mutate(week = week(interval60),
dotw = wday(interval60, label = TRUE)) %>%
st_sf()
```
To test for serial (temporal) correlation, additional feature engineering creates time lags. `arrange` sorts the data by space then time; `group_by` groups by tract; and `lag` returns the `Trip_Count` for the *nth* previous time period.
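A quick sketch of `dplyr::lag` on a toy vector illustrates what each lag feature contains:

```
# lag() shifts a series back by n positions, padding the front with NA
dplyr::lag(c(5, 3, 8, 2), 1)   # NA  5  3  8
dplyr::lag(c(5, 3, 8, 2), 2)   # NA NA  5  3
```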
```
ride.panel <-
ride.panel %>%
arrange(Pickup.Census.Tract, interval60) %>%
group_by(Pickup.Census.Tract) %>%
mutate(lagHour = dplyr::lag(Trip_Count,1),
lag2Hours = dplyr::lag(Trip_Count,2),
lag3Hours = dplyr::lag(Trip_Count,3),
lag4Hours = dplyr::lag(Trip_Count,4),
lag12Hours = dplyr::lag(Trip_Count,12),
lag1day = dplyr::lag(Trip_Count,24)) %>%
ungroup()
as.data.frame(filter(
ride.panel, Pickup.Census.Tract == "17031831900"))[1:6, c(1:3,10:11)]
```
```
## interval60 Pickup.Census.Tract Trip_Count lagHour lag2Hours
## 1 2018-11-05 00:00:00 17031831900 2 NA NA
## 2 2018-11-05 01:00:00 17031831900 2 2 NA
## 3 2018-11-05 02:00:00 17031831900 0 2 2
## 4 2018-11-05 03:00:00 17031831900 0 0 2
## 5 2018-11-05 04:00:00 17031831900 0 0 0
## 6 2018-11-05 05:00:00 17031831900 3 0 0
```
### 8\.2\.5 Split training and test
How might generalizability be tested for in this use case? Random k\-fold cross\-validation or spatial cross\-validation (5\.5\.1\) both seem reasonable. LOGO\-CV could even be used to cross\-validate in time, across hours, days, or days of the week.
Here, a time series approach is taken, training on 5 weeks of data, weeks 45\-49, and testing on the following 3 weeks, 50\-52\. If the time/space experience is generalizable, it can be used to project into the near future. The below code splits the data by `week`.
```
ride.Train <- filter(ride.panel, week < 50)
ride.Test <- filter(ride.panel, week >= 50)
```
### 8\.2\.6 What about distance features?
Why not measure exposure or distance to points of interest, as we have in previous chapters? It seems reasonable, for instance, that ride share trips would decline as distance to subway stations declines, as riders trade off car trips for transit. Why not account for this? Check out the table below.
| time | tract | Distance\_to\_subway | Trip\_Count |
| --- | --- | --- | --- |
| 1 | 1 | 200 | 10 |
| 2 | 1 | 200 | 13 |
| 3 | 1 | 200 | 18 |
| 4 | 1 | 200 | 22 |
| 5 | 1 | 200 | 24 |
| 1 | 2 | 1890 | 45 |
| 2 | 2 | 1890 | 62 |
| 3 | 2 | 1890 | 89 |
| 4 | 2 | 1890 | 91 |
| 5 | 2 | 1890 | 100 |
**Table 8\.2**
The first two columns in the table define a short panel: two tracts, each observed over five time intervals. Note the similarity between `Distance_to_subway` and `tract`, and note the coefficient on `Distance_to_subway` when it is included in a regression with `time` and `tract` (Table 8\.3 below).
The missing coefficient reflects the fact that `Distance_to_subway` is perfectly collinear with `tract`. The lesson is that these exposure variables are not needed for panel data when `Pickup.Census.Tract` is controlled for directly. It may be hard to conceptualize, but controlling for tract\-level variation controls, in part, for exposure to points of interest (a reproducible sketch follows Table 8\.3\).
**Table 8\.3 Collinear Regression**
| | Trip\_Count |
| --- | --- |
| time | 8\.800\*\*\* (2\.248\) |
| tract | 60\.000\*\*\* (6\.359\) |
| Distance\_to\_subway | |
| Constant | \-69\.000\*\*\* (12\.107\) |
| N | 10 |
| R2 | 0\.937 |
| Adjusted R2 | 0\.919 |
| Residual Std. Error | 10\.054 (df \= 7\) |
| F Statistic | 52\.178\*\*\* (df \= 2; 7\) |
| ⋆p\<0\.1; ⋆⋆p\<0\.05; ⋆⋆⋆p\<0\.01 | |
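Table 8\.3 can be reproduced with the sketch below, which rebuilds the toy panel from Table 8\.2 as `colinear_df` (an assumed construction; the original code is not shown) and re\-estimates the regression. `lm` returns `NA` for `Distance_to_subway` because it is constant within each tract.

```
# Rebuild the Table 8.2 toy panel (assumed construction)
colinear_df <- data.frame(
  time = rep(1:5, 2),
  tract = rep(c(1, 2), each = 5),
  Distance_to_subway = rep(c(200, 1890), each = 5),
  Trip_Count = c(10, 13, 18, 22, 24, 45, 62, 89, 91, 100))

# Distance_to_subway is dropped (NA coefficient) - perfectly collinear with tract
coef(lm(Trip_Count ~ time + tract + Distance_to_subway, data = colinear_df))
```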
8\.3 Exploratory Analysis \- ride share
---------------------------------------
In this section, the ride share data is explored for time, space, weather, and demographic relationships. Sections 8\.3\.1 and 8\.3\.2 illustrate temporal and spatial trends, respectively, while 8\.3\.3 creates a space/time animation. Finally, 8\.3\.4 explores correlations with weather.
### 8\.3\.1 `Trip_Count` serial autocorrelation
If `Trip_Count` exhibits serial (temporal) correlation, then the time lag features should lead to better predictions. Several tests are conducted below, beginning with a time series visualization. `geom_vline` is used to visualize `mondays` as well as Thanksgiving and Christmas (dotted lines).
There are some interesting trends to note. Most weeks exhibit remarkably comparable time series patterns with consistent peaks and troughs. This suggests the presence of serial correlation. Weeks surrounding Thanksgiving and Christmas appear as clear outliers, however.
```
mondays <-
mutate(ride.panel,
monday = ifelse(dotw == "Mon" & hour(interval60) == 1,
interval60, 0)) %>%
filter(monday != 0)
tg <- as.POSIXct("2018-11-22 01:00:00 UTC")
xmas <- as.POSIXct("2018-12-24 01:00:00 UTC")
st_drop_geometry(rbind(
mutate(ride.Train, Legend = "Training"),
mutate(ride.Test, Legend = "Testing"))) %>%
group_by(Legend, interval60) %>%
summarize(Trip_Count = sum(Trip_Count)) %>%
ungroup() %>%
ggplot(aes(interval60, Trip_Count, colour = Legend)) + geom_line() +
scale_colour_manual(values = palette2) +
geom_vline(xintercept = tg, linetype = "dotted") +
geom_vline(xintercept = xmas, linetype = "dotted") +
geom_vline(data = mondays, aes(xintercept = monday)) +
labs(title="Rideshare trips by week: November-December",
subtitle="Dotted lines for Thanksgiving & Christmas",
x="Day", y="Trip Count") +
plotTheme() + theme(panel.grid.major = element_blank())
```
Next, the time `lag` features are tested for correlation with `Trip_Count`. `plotData.lag` returns the `Trip_Count` and time lag features for week 45\. `fct_relevel` reorders the lag levels. Omit that line and try `levels(plotData.lag$Variable)`.
Pearson correlation is then calculated for each `Variable` in `correlation.lag`.
```
plotData.lag <-
filter(as.data.frame(ride.panel), week == 45) %>%
dplyr::select(starts_with("lag"), Trip_Count) %>%
gather(Variable, Value, -Trip_Count) %>%
mutate(Variable = fct_relevel(Variable, "lagHour","lag2Hours","lag3Hours",
"lag4Hours","lag12Hours","lag1day"))
correlation.lag <-
group_by(plotData.lag, Variable) %>%
summarize(correlation = round(cor(Value, Trip_Count, use = "complete.obs"), 2))
```
The very strong `Trip_Count` correlations are visualized below. Note that the correlation decreases with each additional lag hour, but predictive power returns with the 1 day lag. These features should be strong predictors in a model. See 5\.4\.1 to overlay correlation coefficients in a `ggplot`.
### 8\.3\.2 `Trip_Count` spatial autocorrelation
Ride share exhibits strong temporal correlation, but how about spatial autocorrelation? Figures 8\.6 and 8\.7 map tract `Trip_Count` sums by week and by day of the week, respectively. Sum is chosen over mean here to avoid tract/time pairs with 0 counts. Even during the holidays, the spatial pattern appears consistent, with trips concentrated in Chicago’s central business district, The Loop (southeastern portion of the study area).
Note that the `q5` function is used to map quintile breaks, but a vector of `labels` is fed to `scale_fill_manual`. Assign the data wrangling portion of the code block below to a `temp.df` and run `qBr(temp.df, "Sum_Trip_Count")` to get the breaks.
```
group_by(ride.panel, week, Pickup.Census.Tract) %>%
summarize(Sum_Trip_Count = sum(Trip_Count)) %>%
ungroup() %>%
ggplot() + geom_sf(aes(fill = q5(Sum_Trip_Count))) +
facet_wrap(~week, ncol = 8) +
scale_fill_manual(values = palette5,
labels = c("16", "140", "304", "530", "958"),
name = "Trip_Count") +
labs(title="Sum of rideshare trips by tract and week") +
mapTheme() + theme(legend.position = "bottom")
```
Finally, spatial autocorrelation is tested for by visualizing spatial lag correlations (4\.2\). A queen contiguity spatial weights matrix (5\.4\) relates tract *t* to adjacent tracts. “First order contiguity” refers to those tracts that touch tract *t*. “Second order” refers to the tracts that touch those tracts, etc.
This code is a bit verbose and thus withheld. These features will not be used in the model, but the resulting plot does show that there is strong spatial autocorrelation in ride share.
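For reference, a minimal sketch of one way to build such a matrix with the `spdep` package (an assumption; not the withheld code) is below.

```
# A sketch only: queen contiguity weights for the study area tracts
library(spdep)

queen.nb <- poly2nb(studyArea.tracts, queen = TRUE)   # first order neighbors
queen.weights <- nb2listw(queen.nb, style = "W", zero.policy = TRUE)
```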
### 8\.3\.3 Space/time correlation?
Ride share in Chicago exhibits very strong space/time dependencies, so why not visualize both together? Below, the `gganimate` package is used to build an animated gif map of ride share trips over space and time. Here, the 15 minute intervals, `interval15`, are used from a single Monday in `week45.panel`. Tract geometries are pulled from `ride2`.
```
week45 <-
filter(ride2 , week == 45 & dotw == "Mon")
week45.panel <-
expand.grid(
interval15 = unique(week45$interval15),
Pickup.Census.Tract = unique(ride2$Pickup.Census.Tract))
```
A `ride.animation.data` panel is created following the same routine as 8\.2\.4 above. Recall, we are using a 20% sample of the data, so the actual number of trips is much higher. Nevertheless, the space/time pattern emanating outward from the Loop is apparent. Note again, the use of `fct_relevel`.
```
ride.animation.data <-
mutate(week45, Trip_Counter = 1) %>%
right_join(week45.panel) %>%
group_by(interval15, Pickup.Census.Tract) %>%
summarize(Trip_Count = sum(Trip_Counter, na.rm=T)) %>%
ungroup() %>%
left_join(chicagoTracts, by=c("Pickup.Census.Tract" = "GEOID")) %>%
st_sf() %>%
mutate(Trips = case_when(Trip_Count == 0 ~ "0 trips",
Trip_Count > 0 & Trip_Count <= 3 ~ "1-3 trips",
Trip_Count > 3 & Trip_Count <= 6 ~ "4-6 trips",
Trip_Count > 6 & Trip_Count <= 10 ~ "7-10 trips",
Trip_Count > 10 ~ "11+ trips")) %>%
mutate(Trips = fct_relevel(Trips, "0 trips","1-3 trips","4-6 trips",
"7-10 trips","10+ trips"))
```
The animation object is created below. Install and load the `gifski` package. `transition_manual` is set to `interval15` so that a new map is generated for each 15 minute interval. `animate` creates the gif, setting `duration` to ensure that the entire animation lasts only 20 seconds.
```
rideshare_animation <-
ggplot() +
geom_sf(data = ride.animation.data, aes(fill = Trips)) +
scale_fill_manual(values = palette5) +
labs(title = "Rideshare pickups for one day in November 2018",
subtitle = "15 minute intervals: {current_frame}") +
transition_manual(interval15) +
mapTheme()
```
```
animate(rideshare_animation, duration=20, renderer = gifski_renderer())
```
The below code can be used to write the animated gif to your local machine.
```
anim_save("rideshare_local", rideshare_animation, duration=20, renderer = gifski_renderer())
```
### 8\.3\.4 Weather
`ride.panel` includes three weather\-related variables. The below bar plot removes the spatial variation and creates a dummy variable, `isPercip`, to ask whether precipitation affects mean `Trip_Count`. There appears to be little effect.
```
st_drop_geometry(ride.panel) %>%
group_by(interval60) %>%
summarize(Trip_Count = mean(Trip_Count),
Percipitation = first(Percipitation)) %>%
mutate(isPercip = ifelse(Percipitation > 0,"Rain/Snow", "None")) %>%
group_by(isPercip) %>%
summarize(Mean_Trip_Count = mean(Trip_Count)) %>%
ggplot(aes(isPercip, Mean_Trip_Count)) + geom_bar(stat = "identity") +
labs(title="Does ridership vary with percipitation?",
x="Percipitation", y="Mean Trip Count") +
plotTheme()
```
Do more people take ride share when it is colder? Below, `Trip_Count` is plotted as a function of `Temperature` by `week`. These plots suggest the opposite \- that in most weeks, ride share increases as temps warm. If that sounds strange for November and December, it is because this correlation is ‘spurious’.
A key relationship has been omitted \- namely that both temperature and ride share increase with the hour of the day. Temperature may still be an important predictor, but only when time is controlled for. A couple of quick regressions can illustrate this point. Regressing `Trip_Count` as a function of `Temperature` estimates that a one degree increase in temperature leads to a 5\.54 increase in trips.
When hourly fixed effects are added to the regression, the model suggests a one degree increase in temperature leads to a 7\.47 *decrease* in `Trip_Count`. This is what we would expect \- that all else equal, as temperature increases, travelers are less likely to take ride share versus, say, walking.
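Those two quick regressions are not shown; a sketch of one way to run them is below (the exact specification behind the 5\.54 and \-7\.47 estimates is an assumption, so coefficients may differ).

```
# Temperature alone, then Temperature with hourly fixed effects (hypothetical names)
fit.temp   <- lm(Trip_Count ~ Temperature, data = st_drop_geometry(ride.panel))
fit.tempFE <- lm(Trip_Count ~ Temperature + as.factor(hour(interval60)),
                 data = st_drop_geometry(ride.panel))
```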
```
st_drop_geometry(ride.panel) %>%
group_by(interval60) %>%
summarize(Trip_Count = mean(Trip_Count),
Temperature = first(Temperature)) %>%
mutate(week = week(interval60)) %>%
ggplot(aes(Temperature, Trip_Count)) +
geom_point() + geom_smooth(method = "lm", se= FALSE) +
facet_wrap(~week, ncol=8) +
labs(title="Trip Count as a fuction of Temperature by week",
x="Temperature", y="Mean Trip Count") +
plotTheme()
```
8\.4 Modeling and validation using `purrr::map`
-----------------------------------------------
In this section, models are estimated from `ride.Train` and tested on `ride.Test` to gauge how well the space/time features forecast ride share demand. The `purrr` family of functions will be used to loop through a set of ‘nested’ data frames to efficiently compare across different model specifications. This functionality is very powerful and, programmatically, is a step up from the tidy data wrangling used thus far.
### 8\.4\.1 A short primer on `nest`ed `tibble`s
Let’s return to the example space/time panel created above, `colinear_df` (the toy panel from Table 8\.2\), by re\-engineering this data frame as a nested `tibble`. A `tibble` is like a data frame with more bells and whistles. In the code block below, `colinear_df` is converted with `as.tibble`. `nest` then embeds 5 separate tibbles in `colinear_nested`, delineated by `time`. This gives a tibble of tibbles.
```
colinear_nested <- nest(as.tibble(colinear_df), -time)
colinear_nested
```
```
## # A tibble: 5 x 2
## time data
## <dbl> <list>
## 1 1 <tibble [2 x 3]>
## 2 2 <tibble [2 x 3]>
## 3 3 <tibble [2 x 3]>
## 4 4 <tibble [2 x 3]>
## 5 5 <tibble [2 x 3]>
```
Nesting allows one to split and wrangle data in all sorts of interesting ways. Any nested `tibble` can be `unnest`ed with:
```
unnest(colinear_nested[1,2])
```
```
## # A tibble: 2 x 3
## tract Distance_to_subway Trip_Count
## <dbl> <dbl> <dbl>
## 1 1 200 10
## 2 2 1890 45
```
### 8\.4\.2 Estimate a ride share forecast
In this section, four different linear regressions are estimated on `ride.Train`, each with different fixed effects:
1. `reg1` focuses on just time, including hour fixed effects, day of the week, and `Temperature`.
2. `reg2` focuses on just space effects with the `Pickup.Census.Tract` fixed effects.
3. `reg3` includes both time and space fixed effects.
4. `reg4` adds the time `lag` features.
Time features like `hour` could be modeled as either a continuous or categorical feature. As a continuous feature, the interpretation is that a 1 `hour` increase is associated with an estimated change in `Trip_Count`. As a factor, the interpretation is that there are significant differences in `Trip_Count` by hour. Both options can be explored; note that `hour(interval60)` returns an integer, so as written in the regressions below it enters as a continuous feature, while wrapping it in `as.factor()` would estimate hourly fixed effects.
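For illustration, a sketch of the categorical (fixed effects) alternative simply wraps `hour` in `as.factor()`; `reg1.hourFE` is a hypothetical name and this specification is not estimated below.

```
# Hourly fixed effects version of the time-only model (illustration only)
reg1.hourFE <- lm(Trip_Count ~ as.factor(hour(interval60)) + dotw + Temperature,
                  data = ride.Train)
```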
Spatial fixed effects for `Pickup.Census.Tract` are also included to account for the across\-tract differences, like amenities, access to transit, distance to the Loop, etc.
Ordinary Least Squares (OLS) is chosen, despite `Trip_Count` being a count variable. Poisson is an option, but the counts are sufficiently large to feel at ease with OLS. Not surprisingly, the best choice of algorithm is that which leads to the most accurate and generalizable model.
```
reg1 <- lm(Trip_Count ~ hour(interval60) + dotw + Temperature, data=ride.Train)
reg2 <- lm(Trip_Count ~ Pickup.Census.Tract + dotw + Temperature, data=ride.Train)
reg3 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature,
data=ride.Train)
reg4 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature +
lagHour + lag2Hours + lag3Hours + lag12Hours + lag1day,
data=ride.Train)
```
### 8\.4\.3 Validate test set by time
In this section, Mean Absolute Error (MAE) is calculated on `ride.Test` for each model. `ride.Test` includes 3 weeks and is highly influenced by the Christmas holiday. To understand if models generalize to the holiday and non\-holiday weeks, `ride.Test.weekNest` nests `ride.Test` by `week`. Note that the `data` field contains 3 sf tibbles (with geometries), and `unnest(ride.Test.weekNest[1,2])` returns one week’s worth of simple features data.
```
ride.Test.weekNest <-
as.data.frame(ride.Test) %>%
nest(-week)
ride.Test.weekNest
```
```
## # A tibble: 3 x 2
## week data
## <dbl> <list>
## 1 50 <tibble [33,768 x 14]>
## 2 51 <tibble [33,768 x 14]>
## 3 52 <tibble [33,768 x 14]>
```
Next, a small function is created that takes a tibble, `dat`, and a regression model, `fit`, as its inputs, and outputs predictions as `pred`. This function is used to predict for each week in `ride.Test.weekNest`.
```
model_pred <- function(dat, fit){
pred <- predict(fit, newdata = dat)}
```
The nested format allows one to loop through each model for each week and `mutate` summary statistics. In the code block below, `week_predictions` are calculated for each week in `ride.Test`. The `map` function applies the `model_pred` function to each nested tibble.
Take the first line of the below `mutate`, for example. A new column, `A_Time_FE`, includes predictions for `reg1` \- the time fixed effects model. The predictions are created by `map`ping the function, `model_pred`, to each row of `data`, parameterizing `fit` as the `reg1` model.
```
week_predictions <-
ride.Test.weekNest %>%
mutate(A_Time_FE = map(.x = data, fit = reg1, .f = model_pred),
B_Space_FE = map(.x = data, fit = reg2, .f = model_pred),
C_Space_Time_FE = map(.x = data, fit = reg3, .f = model_pred),
D_Space_Time_Lags = map(.x = data, fit = reg4, .f = model_pred))
week_predictions
```
```
## # A tibble: 3 x 6
## week data A_Time_FE B_Space_FE C_Space_Time_FE D_Space_Time_La~
## <dbl> <list> <list> <list> <list> <list>
## 1 50 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 2 51 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 3 52 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
```
The output shows that each new column is a `list` of predictions for each model by week. Once columns are moved to long form with `gather`, four new columns are generated in the code block below.
`Observed` is the actual space/time `Trip_Count` for that week, created by looping through (`map`) each nested tibble in the `data` field and `pull`ing `Trip_Count`. `Absolute_Error` is created with `map2`, which maps over two inputs. In this case, `Observed` and `Prediction` are fed into a function (`~`) that calculates the absolute value of their difference.
To calculate `MAE`, `map_dbl`, a variant of `map`, is used to loop through `Absolute_Error`, calculating the `mean`. The same function calculates the standard deviation of absolute error, `sd_AE`, which is a useful measure of generalizability.
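A toy sketch of how `map2` and `map_dbl` behave:

```
# map2() iterates over two inputs in parallel and returns a list
map2(c(3, 5), c(1, 9), ~ abs(.x - .y))    # list(2, 4)

# map_dbl() iterates and returns a plain numeric vector
map_dbl(list(c(2, 4), c(1, 3)), mean)     # 3 2
```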
```
week_predictions <- week_predictions %>%
gather(Regression, Prediction, -data, -week) %>%
mutate(Observed = map(data, pull, Trip_Count),
Absolute_Error = map2(Observed, Prediction, ~ abs(.x - .y)),
MAE = map_dbl(Absolute_Error, mean),
sd_AE = map_dbl(Absolute_Error, sd))
```
The resulting data frame shows goodness of fit by week and model. The MAE for the time effects model (`reg1`) in week 50 is comparable to the mean observed `Trip_Count` of 5\.86\. However, with increasing sophistication, the model becomes more accurate and more generalizable.
This nested framework makes it easy to plot MAE by model by week, as below. Both the spatial fixed effects and time lags add significant predictive power.
```
week_predictions %>%
dplyr::select(week, Regression, MAE) %>%
gather(Variable, MAE, -Regression, -week) %>%
ggplot(aes(week, MAE)) +
geom_bar(aes(fill = Regression), position = "dodge", stat="identity") +
scale_fill_manual(values = palette5) +
labs(title = "Mean Absolute Errors by model specification and week") +
plotTheme()
```
For each model, predicted and observed `Trip_Count` is taken out of the spatial context and their means plotted in time series form below. Models `A` and `C` appear to have the same time trend because again, the spatial context has been removed.
With more sophistication comes the ability to predict for the highest peaks, and the time lags help make this happen. The time series does show that the model over\-predicts trips for some of the days around Christmas. This may be because the training data suggests more trips should otherwise occur during those times.
```
week_predictions %>%
mutate(interval60 = map(data, pull, interval60),
Pickup.Census.Tract = map(data, pull, Pickup.Census.Tract)) %>%
dplyr::select(interval60, Pickup.Census.Tract, Observed, Prediction, Regression) %>%
unnest() %>%
gather(Variable, Value, -Regression, -interval60, -Pickup.Census.Tract) %>%
group_by(Regression, Variable, interval60) %>%
summarize(Value = mean(Value)) %>%
ggplot(aes(interval60, Value, colour=Variable)) + geom_line(size = 1.1) +
facet_wrap(~Regression, ncol=1) +
scale_colour_manual(values = palette2) +
labs(title = "Mean Predicted/Observed ride share by hourly interval",
x = "Hour", y= "Rideshare Trips") +
plotTheme()
```
### 8\.4\.4 Validate test set by space
```
error.byWeek <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>% st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, week, geometry) %>%
gather(Variable, Value, -Pickup.Census.Tract, -week, -geometry) %>%
group_by(Variable, Pickup.Census.Tract, week) %>%
summarize(MAE = mean(Value))
```
Above, MAE for `reg4` is mapped, by `unnest`ing it from `week_predictions`. Errors for all time periods are averaged by `Pickup.Census.Tract` and `week`. The highest errors are in the Loop where more trips occur and there are some interesting error patterns along arterial routes running west and north from downtown.
MAE is then mapped by hour for Monday of the 50th week. Errors cluster in The Loop but are otherwise distributed throughout. Each map below could be tested for spatial autocorrelation, which may suggest the addition of new features. Given the use case, however, is there an incentive to have a really generalizable model?
We have been wary of geospatial machine learning models with errors that vary systematically across space. In other use cases, the concern was that these differences could drive disparate impact. Do ride share companies have an incentive to be fair in the communities they serve? Could errors in this model be driven by selection bias in the training data?
It is possible that ride share drivers select into neighborhoods based on passenger perception, but what incentive do ride share companies have to ensure that everyone who wants a ride gets one?
```
error.byHour <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>%
st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, geometry, interval60) %>%
gather(Variable, Value, -interval60, -Pickup.Census.Tract, -geometry) %>%
filter(wday(interval60, label = TRUE) == "Mon" & week(interval60) == 50) %>%
group_by(hour = hour(interval60), Pickup.Census.Tract) %>%
summarize(MAE = mean(Value))
```
### 8\.4\.1 A short primer on `nest`ed `tibble`s
Let’s return to the example space/time panel created above, `colinear_df`, by re\-engineering this data frame as a nested `tibble`. A `tibble` is a like a data frame with more bells and whistles. In the code block below, `colinear_df` is converted `as.tibble`. `nest` then embeds 5 separate tibbles in `colinear_nested`, delineated by `time`. This gives a tibble of tibbles.
```
colinear_nested <- nest(as.tibble(colinear_df), -time)
colinear_nested
```
```
## # A tibble: 5 x 2
## time data
## <dbl> <list>
## 1 1 <tibble [2 x 3]>
## 2 2 <tibble [2 x 3]>
## 3 3 <tibble [2 x 3]>
## 4 4 <tibble [2 x 3]>
## 5 5 <tibble [2 x 3]>
```
Nesting allows one to split and wrangle data in all sorts of interesting ways. Any nested `tibble` can be `unnest`ed with:
```
unnest(colinear_nested[1,2])
```
```
## # A tibble: 2 x 3
## tract Distance_to_subway Trip_Count
## <dbl> <dbl> <dbl>
## 1 1 200 10
## 2 2 1890 45
```
### 8\.4\.2 Estimate a ride share forecast
In this section, four different linear regressions are estimated on `ride.Train`, each with different fixed effects:
1. `reg1` focuses on just time, including hour fixed effects, day of the week, and `Temperature`.
2. `reg2` focuses on just space effects with the `Pickup.Census.Tract` fixed effects.
3. `reg3` includes both time and space fixed effects.
4. `reg4` adds the time `lag` features.
Time features like `hour` could be modeled as either a continuous or categorical feature. As a continuous feature, the interpretation is that a 1 `hour` increase is associated with an estimated change in `Trip_Count`. As a factor, the interpretation is that there are significant differences in `Trip_Count` by hour. Both options can be explored, but below, the latter is chosen.
Spatial fixed effects for `Pickup.Census.Tract` are also included to account for the across\-tract differences, like amenities, access to transit, distance to the Loop, etc.
Ordinary Least Squares (OLS) is chosen, despite `Trip_Count` being a count variable. Poisson is an option, but the counts are sufficiently large to feel at ease with OLS. Not surprisingly, the best choice of algorithm is that which leads to the most accurate and generalizable model.
```
reg1 <- lm(Trip_Count ~ hour(interval60) + dotw + Temperature, data=ride.Train)
reg2 <- lm(Trip_Count ~ Pickup.Census.Tract + dotw + Temperature, data=ride.Train)
reg3 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature,
data=ride.Train)
reg4 <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw + Temperature +
lagHour + lag2Hours + lag3Hours + lag12Hours + lag1day,
data=ride.Train)
```
### 8\.4\.3 Validate test set by time
In this section, Mean Absolute Error (MAE) is calculated on `ride.Test` for each model. `ride.Test` includes 3 weeks and is highly influenced by the Christmas holiday. To understand if models generalize to the holiday and non\-holiday weeks, `ride.Test.weekNest` nests `ride.Test` by `week`. Note that the `data` field contains 3 sf tibbles (with geometries), and `unnest(ride.Test.weekNest[1,2])` returns one week’s worth of simple features data.
```
ride.Test.weekNest <-
as.data.frame(ride.Test) %>%
nest(-week)
ride.Test.weekNest
```
```
## # A tibble: 3 x 2
## week data
## <dbl> <list>
## 1 50 <tibble [33,768 x 14]>
## 2 51 <tibble [33,768 x 14]>
## 3 52 <tibble [33,768 x 14]>
```
Next, a small function is created that takes a tibble, `dat` and a regression model, `fit` as its inputs, and outputs predictions as `pred`. This function is used to predict for each week in `ride.Trest.weekNest`.
```
model_pred <- function(dat, fit){
pred <- predict(fit, newdata = dat)}
```
The nested format allows one to loop through each model for each week and `mutate` summary statistics. In the code block below, `week_predictions` are calculated for each week in `ride.Test`. The `map` function applies the `model_pred` function to each nested tibble.
Take the first line of the below `mutate`, for example. A new column, `Time_FE`, includes predictions for `reg1` \- the time fixed effects model. The predictions are created by `map`ping the function, `model_pred`, to each row of `data`, parameterizing `fit` as the `reg1` model.
```
week_predictions <-
ride.Test.weekNest %>%
mutate(A_Time_FE = map(.x = data, fit = reg1, .f = model_pred),
B_Space_FE = map(.x = data, fit = reg2, .f = model_pred),
C_Space_Time_FE = map(.x = data, fit = reg3, .f = model_pred),
D_Space_Time_Lags = map(.x = data, fit = reg4, .f = model_pred))
week_predictions
```
```
## # A tibble: 3 x 6
## week data A_Time_FE B_Space_FE C_Space_Time_FE D_Space_Time_La~
## <dbl> <list> <list> <list> <list> <list>
## 1 50 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 2 51 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
## 3 52 <tibble [33,7~ <dbl [33,76~ <dbl [33,7~ <dbl [33,768]> <dbl [33,768]>
```
The output shows that each new column is a `list` of predictions for each model by week. Once columns are moved to long form with `gather`, four new columns are generated in the code block below.
`Observed` is the actual space/time `Trip_Count` for that week, created by looping through (`map`) each nested tibble in the `data` field and `pull`ing `Trip_Count`. `Absolute_Error` is created with `map2`, which maps over two inputs. In this case, `Observed` and `Prediction` are fed into a function (`~`) that calculates the absolute value of their difference.
To calculate `MAE`, `map_dbl`, a variant of `map`, is used to loop through `Absolute_Error`, calculating the `mean`. The same function calculates the standard deviation of absolute error, `sd_AE`, which is a useful measure of generalizability.
```
week_predictions <- week_predictions %>%
gather(Regression, Prediction, -data, -week) %>%
mutate(Observed = map(data, pull, Trip_Count),
Absolute_Error = map2(Observed, Prediction, ~ abs(.x - .y)),
MAE = map_dbl(Absolute_Error, mean),
sd_AE = map_dbl(Absolute_Error, sd))
```
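For readers new to `purrr`, the following toy example illustrates the `map2`/`map_dbl` pattern used above (made\-up values, unrelated to the ride share data):

```
library(purrr)
# map2 loops over two lists in parallel; map_dbl returns a numeric vector
obs <- list(c(10, 12), c(3, 5))
pred <- list(c(9, 15), c(4, 4))
abs_error <- map2(obs, pred, ~ abs(.x - .y))
map_dbl(abs_error, mean) # mean absolute error for each list element
```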
The resulting data frame shows goodness of fit by week and model. The MAE for the time effects model (`reg1`) in week 50 is comparable to the mean observed `Trip_Count` of 5\.86\. However, with increasing sophistication, the model becomes more accurate and more generalizable.
This nested framework makes it easy to plot MAE by model by week, as below. Both the spatial fixed effects and time lags add significant predictive power.
```
week_predictions %>%
dplyr::select(week, Regression, MAE) %>%
gather(Variable, MAE, -Regression, -week) %>%
ggplot(aes(week, MAE)) +
geom_bar(aes(fill = Regression), position = "dodge", stat="identity") +
scale_fill_manual(values = palette5) +
labs(title = "Mean Absolute Errors by model specification and week") +
plotTheme()
```
For each model, predicted and observed `Trip_Count` are taken out of the spatial context and their means are plotted in time series form below. Models `A` and `C` appear to have the same time trend because, again, the spatial context has been removed.
With more sophistication comes the ability to predict for the highest peaks, and the time lags help make this happen. The time series does show that the model over\-predicts trips for some of the days around Christmas. This may be because the training data suggests more trips should otherwise occur during those times.
```
week_predictions %>%
mutate(interval60 = map(data, pull, interval60),
Pickup.Census.Tract = map(data, pull, Pickup.Census.Tract)) %>%
dplyr::select(interval60, Pickup.Census.Tract, Observed, Prediction, Regression) %>%
unnest() %>%
gather(Variable, Value, -Regression, -interval60, -Pickup.Census.Tract) %>%
group_by(Regression, Variable, interval60) %>%
summarize(Value = mean(Value)) %>%
ggplot(aes(interval60, Value, colour=Variable)) + geom_line(size = 1.1) +
facet_wrap(~Regression, ncol=1) +
scale_colour_manual(values = palette2) +
labs(title = "Mean Predicted/Observed ride share by hourly interval",
x = "Hour", y= "Rideshare Trips") +
plotTheme()
```
### 8\.4\.4 Validate test set by space
```
error.byWeek <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>% st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, week, geometry) %>%
gather(Variable, Value, -Pickup.Census.Tract, -week, -geometry) %>%
group_by(Variable, Pickup.Census.Tract, week) %>%
summarize(MAE = mean(Value))
```
Above, the MAE for `reg4` is mapped by `unnest`ing it from `week_predictions`. Errors for all time periods are averaged by `Pickup.Census.Tract` and `week`. The highest errors are in the Loop, where more trips occur, and there are some interesting error patterns along arterial routes running west and north from downtown.
MAE is then mapped by hour for Monday of the 50th week. Errors cluster in the Loop but are otherwise distributed throughout. Each map below could be tested for spatial autocorrelation, which may suggest the addition of new features (a sketch of such a test follows the code block below). Given the use case, however, is there an incentive to have a really generalizable model?
We have been wary of geospatial machine learning models with errors that vary systematically across space. In other use cases, the concern was that these differences could drive disparate impact. Do ride share companies have an incentive to be fair in the communities they serve? Could errors in this model be driven by selection bias in the training data?
It is possible that ride share drivers select into neighborhoods based on perceptions of passengers, but what incentive do ride share companies have to ensure that everyone who wants a ride gets one?
```
error.byHour <-
filter(week_predictions, Regression == "D_Space_Time_Lags") %>%
unnest() %>%
st_sf() %>%
dplyr::select(Pickup.Census.Tract, Absolute_Error, geometry, interval60) %>%
gather(Variable, Value, -interval60, -Pickup.Census.Tract, -geometry) %>%
filter(wday(interval60, label = TRUE) == "Mon" & week(interval60) == 50) %>%
group_by(hour = hour(interval60), Pickup.Census.Tract) %>%
summarize(MAE = mean(Value))
```
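As mentioned above, the tract\-level errors could be tested for spatial autocorrelation. Below is a minimal sketch using the `spdep` package (an assumption; it is not part of the original analysis), testing the 8am errors from `error.byHour` with Moran’s I:

```
# Sketch: Moran's I on tract-level MAE for a single hour (requires spdep)
library(spdep)
errors.8am <- filter(error.byHour, hour == 8)
neighborList <- poly2nb(errors.8am, queen = TRUE) # queen contiguity neighbors
spatialWeights <- nb2listw(neighborList, style = "W", zero.policy = TRUE)
moran.test(errors.8am$MAE, spatialWeights, zero.policy = TRUE)
```

A statistically significant, positive Moran’s I would suggest the errors cluster in space and that additional spatial features may be warranted.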
8\.5 Conclusion \- Dispatch
---------------------------
*Why does our model predict so well?*
It is rare that linear models have such small errors, but in this case, the time lag features are very strong predictors. These very high resolution time lags can reasonably be included because the dispatch use case allows for near real time predictions \- like the next hour. Many temporal forecasts have much longer time horizons.
Unlike previous chapters, no cross\-validation was performed, and thus, it is possible that our model is not actually as useful as the results suggest. Random k\-fold or spatial cross\-validation are options. One might also cross\-validate by hour, day of the week or some other time unit to understand how well our model generalizes across time. Space/time cross\-validation is also possible.
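For example, a leave\-one\-week\-out scheme could be sketched as follows, assuming a hypothetical `ride.panel` data frame that holds the complete study panel with the same fields used in `reg4`:

```
# Sketch: leave-one-week-out cross-validation (ride.panel is hypothetical)
cv_weeks <- unique(week(ride.panel$interval60))

LOWO_MAE <- map_dbl(cv_weeks, function(w) {
  train <- filter(ride.panel, week(interval60) != w)
  test <- filter(ride.panel, week(interval60) == w)
  fit <- lm(Trip_Count ~ Pickup.Census.Tract + hour(interval60) + dotw +
              Temperature + lagHour + lag2Hours + lag3Hours +
              lag12Hours + lag1day, data = train)
  mean(abs(test$Trip_Count - predict(fit, newdata = test)))
})
```

Comparing `LOWO_MAE` across held\-out weeks would indicate how well the specification generalizes across time.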
*A dispatch algorithm*
Uber’s Marketplace Algorithm is an example of how Uber uses space/time forecasting to reduce response times, save money and maybe even lower automobile congestion. How can Uber be sure that a given algorithm is optimized for these bottom lines?
Ride share companies take many thousands of trips every day. To find the optimal approach, imagine randomly allocating two or more dispatch algorithms across 1000 Chicago drivers over the course of a week. Such a Randomized Control Trial, or what tech companies sometimes call ‘A/B testing’, could help reveal the best\-performing algorithm.
*What about the public sector use of this algorithm?*
What space/time prediction problems might public sector data scientists wrestle with? Any public or private entity that needs to ‘rebalance’ an asset like bike share bikes or personal scooters, would benefit from understanding near future demand. Public parking authorities could maximize parking ticket revenues by allocating patrols to time/space units predicted to have illegal parkers. Emergency Management offices can use metrics like this to strategically place ambulances to reduce response times. Finally, this is an obvious extension to the predictive policing use case discussed in Chapter 5 which would allocate police cruisers in space and time.
8\.6 Assignment \- Predict bike share trips
-------------------------------------------
One of the most difficult operational problems for urban bike share systems is the need to ‘re\-balance’ bicycles across the network. Bike share is not useful if a dock has no bikes to pick up, nor if there are no open docking spaces to deposit a bike. Re\-balancing is the practice of anticipating (or predicting) bike share demand for all docks at all times and manually redistributing bikes to ensure a bike or a docking place is available when needed.
In this assignment, you will pick a city with a bike share open data feed and forecast space/time demand for bike share pickups. Most bike share data has fields for origin, destination and date/time.
Envision a bike re\-balancing plan and design an algorithm to inform such a plan. The deliverables include:
1. 2\-3 paragraphs that introduce the reader to bike share and the need for re\-balancing. How will re\-balancing occur? Perhaps you will manage a small fleet of trucks to move bikes from here to there or perhaps you will offer rewards, discounts or other incentives for riders to move a bike from place to place. *Keep in mind*, your plan will inform the appropriate time lag features you can use. How far forward do you wish to predict for at any given time?
2. Your unit of analysis here is the bike share station, not Census tracts. Engineer features to account for weather and time effects and experiment with some amenity features. Develop two different training/test sets including 1\) a 3 week training set and a 2 week test set of all the stations and 2\) a complete 5 week panel for cross\-validation.
3. Develop exploratory analysis plots that describe the space/time dependencies in the data and create an animated map. Interpret your findings in the context of the re\-balancing plan.
4. Use `purrr` to train and validate several models for comparison on the latter two week test set. Perform either random k\-fold cross validation or LOGO\-CV on the 5 week panel. You may choose to cross validate by time or space. Interpret your findings in the context of accuracy and generalizability.
5. Conclude with how useful your algorithm is for the bike re\-balancing plan.
Chapter 3 R \& RStudio, RMarkdown
=================================
3\.1 Summary
------------
We will begin learning R through RMarkdown, which helps you tell your story of data analysis because you can write text alongside the code. We are actually learning two languages at once: R and Markdown.
### 3\.1\.1 Objectives
In this lesson we will get familiar with:
* the RStudio interface
* RMarkdown
* functions, packages, help pages, and error messages
* assigning variables and commenting
* configuring GitHub with RStudio
### 3\.1\.2 Resources
* [What is RMarkdown?](https://vimeo.com/178485416) awesome 1\-minute video by RStudio
* [R for Data Science](http://r4ds.had.co.nz/) by Hadley Wickham and Garrett Grolemund
* [STAT 545](http://stat545.com/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com) by Jenny Bryan
* [R for Excel Users](https://blog.shotwell.ca/posts/r_for_excel_users/) by Gordon Shotwell
* [Welcome to the tidyverse](https://joss.theoj.org/papers/10.21105/joss.01686) by Hadley Wickham et al.
* [A GIF\-based introduction to RStudio](https://www.pipinghotdata.com/posts/2020-09-07-introducing-the-rstudio-ide-and-r-markdown/) \- Shannon Pileggi, Piping Hot Data
3\.2 RStudio Orientation
------------------------
What is the RStudio IDE (integrated development environment)? The RStudio IDE is software that greatly improves your R experience.
I think that **R is your airplane, and the RStudio IDE is your airport**. You are the pilot, and you use R to go places! With practice you’ll gain skills and confidence; you can fly further distances and get through tricky situations. You will become an awesome pilot and can fly your plane anywhere. And the RStudio IDE provides support! Runways, communication, community, and other services that make your life as a pilot much easier. It provides not only the infrastructure but also a hub for the community that you can interact with.
To launch RStudio, double\-click on the RStudio icon. Launching RStudio also launches R, and you will probably never open R by itself.
Notice the default panes:
* Console (entire left)
* Environment/History (tabbed in upper right)
* Files/Plots/Packages/Help (tabbed in lower right)
We won’t click through this all immediately but we will become familiar with more of the options and capabilities throughout the next few days.
Something critical to know now is that you can make everything you see BIGGER by going to the navigation pane: View \> Zoom In. Learn these keyboard shortcuts; being able to see what you’re typing will help avoid typos \& help us help you.
An important first question: **where are we?**
If you’ve opened RStudio for the first time, you’ll be in your Home directory. This is noted by the `~/` at the top of the console. You can see too that the Files pane in the lower right shows what is in the Home directory where you are. You can navigate around within that Files pane and explore, but note that you won’t change where you are: even as you click through you’ll still be Home: `~/`.
We are going to have our first experience with R through RMarkdown, so let’s do the following.
3\.3 Intro to RMarkdown
-----------------------
An RMarkdown file is a plain text file that allows us to write code and text together, and when it is “knit,” the code will be evaluated and the text formatted so that it creates a reproducible report or document that is nice to read as a human.
This is really critical to reproducibility, and it also saves time. This document will recreate your figures for you in the same document where you are writing text. So no more doing analysis, saving a plot, pasting that plot into Word, redoing the analysis, re\-saving, re\-pasting, etc.
This 1\-minute video does the best job of introducing RMarkdown: [What is RMarkdown?](https://vimeo.com/178485416).
Now let’s experience this a bit ourselves and then we’ll talk about it more.
### 3\.3\.1 Create an RMarkdown file
Let’s do this together:
File \-\> New File \-\> RMarkdown… (or alternatively you can click the green plus in the top left \-\> RMarkdown).
Let’s title it “Testing” and write our name as author, then click OK with the recommended Default Output Format, which is HTML.
OK, first off: by opening a file, we are seeing the 4th pane of the RStudio console, which here is a text editor. This lets us dock and organize our files within RStudio instead of having a bunch of different windows open (but there are options to pop them out if that is what you prefer).
Let’s have a look at this file — it’s not blank; some initial text is already provided for you. Let’s have a high\-level look through it:
* The top part has the Title and Author we provided, as well as today’s date and the output type as an HTML document like we selected above.
* There are white and grey sections. These are the 2 main languages that make up an RMarkdown file.
+ **Grey sections are R code**
+ **White sections are Markdown text**
* There is black and blue text (we’ll ignore the green text for now).
### 3\.3\.2 Knit your RMarkdown file
Let’s go ahead and “Knit” by clicking the blue yarn at the top of the RMarkdown file.
It’s going to ask us to save first; I’ll name mine “testing.Rmd.” Note that by default this will save the file in your home directory `~/`. Since this is a testing document, this is fine to save here; we will get more organized about where we save files very soon. Once you click Save, the knit process will be able to continue.
OK so how cool is this, we’ve just made an html file! This is a single webpage that we are viewing locally on our own computers. Knitting this RMarkdown document has rendered — we also say formatted — both the Markdown text (white) and the R code (grey), and it also executed — we also say ran — the R code.
Let’s have a look at them side\-by\-side:
Let’s take a deeper look at these two files. So much of learning to code is looking for patterns.
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
### 3\.3\.3 Markdown text
Let’s look more deeply at the Markdown text. Markdown is a formatting language for plain text, and there are only a handful of rules to know.
Notice the syntax for:
* **headers** with `#` or `##`
* **bold** with `**`
To see more of the rules, let’s look at RStudio’s built\-in reference. Let’s do this: Help \> Markdown Quick Reference
There are also good [cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet) available online.
### 3\.3\.4 R code
Let’s look at the R code that we see executed in our knitted document.
We see that:
* `summary(cars)` produces a table with information about cars
* `plot(pressure)` produces a plot with information about pressure
There are a couple of things going on here.
`summary()` and `plot()` are called **functions**; they are operations and these ones come installed with R. We call functions installed with R **base R functions**. This is similar to Excel’s functions and formulas.
`cars` and `pressure` are small datasets that come installed with R.
We’ll talk more about functions and data shortly.
### 3\.3\.5 Code chunks
R code is written in code chunks, which are grey.
Each of them starts with 3 backticks and `{r label}`, which signify that there will be R code following. Anything inside the brackets (`{ }`) gives instructions to RMarkdown about how to run that code. For example:
* the first chunk labeled “setup” says `include=FALSE`, and we don’t see it included in the HTML document.
* the second chunk labeled “cars” has no additional instructions, and in the HTML document we see the code and the evaluation of that code (a summary table)
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed, we only see the plot when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but will become powerful as you become a powerful R user. But if you label your code chunks, you must have unique labels.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
There are many more options available that we will discuss as we get more familiar with RMarkdown.
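To make the options concrete, a hand\-written chunk that hides its code but shows the output might look like this (a sketch reusing the built\-in `pressure` data; the label `pressure_summary` is made up):
\`\`\`{r pressure_summary, echo=FALSE}
summary(pressure)
\`\`\`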
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in our RMarkdown file in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: doesn’t have to be only R, other languages supported.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to add some extra carriage returns because sometimes I find it more pleasant to look at my code, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
3\.4 R code in the Console
--------------------------
So far we have been telling R to execute our code only when we knit the document, but we can also write code in the Console to interact with the live R process.
The Console (bottom left pane of the RStudio IDE) is where you can interact with the R engine and run code directly.
Let’s type this in the Console: `summary(pressure)` and hit enter. We see the pressure summary table returned; it is the same information that we saw in our knitted html document. By default, R will display (we also say “print”) the executed result in the Console.
```
summary(pressure)
```
We can also do math as we can in Excel: type the following and press enter.
```
8*22.3
```
### 3\.4\.1 Error messages
When you code in R or any language, you will encounter errors. We will discuss troubleshooting tips more deeply tomorrow in [Collaborating \& getting help](#collaboration); here we will just get a little comfortable with them.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that if you label your code chunks, they need to be unique. Let’s see what happens if they are not. If I (re)name our `summary(pressure)` chunk to “cars,” I will see an error when I try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
### 3\.4\.2 Running RMarkdown code chunks
So far we have written code in our RMarkdown file that is executed when we knit the file. We have also written code directly in the Console that is executed when we press enter/return. Additionally, we can write code in an RMarkdown code chunk and execute it by sending it into the Console (i.e. we can execute code without knitting the document).
How do we do it? There are several ways. Let’s do each of these with `summary(pressure)`.
**First approach: send R code to the Console.**
This approach involves selecting (highlighting) the R code only (`summary(pressure)`), not any of the backticks/fences from the code chunk.
> **Troubleshooting:** If you see `Error: attempt to use zero-length variable name` it is because you have accidentally highlighted the backticks along with the R code. Try again — and don’t forget that you can add spaces within the code chunk or make your RStudio session bigger (View \> Zoom In)!
Do this by selecting code and then:
1. copy\-pasting into the Console and press enter/return.
2. clicking ‘Run’ from RStudio IDE. This is available from:
1. the bar above the file (green arrow)
2. the menu bar: Code \> Run Selected Line(s)
3. keyboard shortcut: command\-return
**Second approach: run full code chunk.**
Since we are already grouping relevant code together in chunks, it’s reasonable that we might want to run it all together at once.
Do this by placing your cursor within a code chunk and then:
1. clicking the little black down arrow next to the Run green arrow and selecting Run Current Chunk. Notice there are also options to run all chunks, run all chunks above or below…
### 3\.4\.3 Writing code in a file vs. Console
When should you write code in a file (.Rmd or .R script) and when should you write it in the Console?
We write things in the file that are necessary for our analysis and that we want to preserve for reproducibility; we will be doing this throughout the workshop to give you a good sense of this. A file is also a great way for you to take notes to yourself.
The Console is good for doing quick calculations like `8*22.3`, testing functions, for calling help pages, for installing packages. We’ll explore these things next.
3\.5 R functions
----------------
Like Excel, the power of R comes not from doing small operations individually (like `8*22.3`). R’s power comes from being able to operate on whole suites of numbers and datasets.
And also like Excel, some of the biggest power in R is that there are built\-in functions that you can use in your analyses (and, as we’ll see, R users can easily create and share functions, and it is this open source developer and contributor community that makes R so awesome).
R has a mind\-blowing collection of built\-in functions that are used with the same syntax: function name with parentheses around what the function needs to do what it is supposed to do.
We’ve seen a few functions already: we’ve seen `plot()` and `summary()`.
Functions always have the same structure: a name, parentheses, and arguments that you can specify. `function_name(arguments)`. When we talk about function names, we use the convention `function_name()` (the name with empty parentheses), but in practice, we usually supply arguments to the function `function_name(arguments)` so that it works on some data. Let’s see a few more function examples.
Like in Excel, there is a function called “sum” to calculate a total. In R, it is spelled lowercase: `sum()`. (As I type in the Console, R will provide suggestions).
Let’s use the `sum()` function to calculate the sum of all the distances traveled in the `cars` dataset. We specify a single column of a dataset using the `$` operator:
```
sum(cars$dist)
```
Another function is simply called `c()`, which combines values together.
So let’s create a new R code chunk. And we’ll write:
```
c(1, 7:9)
```
```
## [1] 1 7 8 9
```
> Aside: some functions don’t require arguments: try typing `date()` into the Console. Be sure to type the parentheses (`date()`); otherwise R will return the code behind the `date()` function rather than the output that you want/expect.
So you can see that this combines these values all into the same place, which is called a vector here. We could also do this with non\-numeric examples, which are called “strings”:
```
c("San Francisco", "Cal Academy")
```
```
## [1] "San Francisco" "Cal Academy"
```
We need to put quotes around non\-numeric values so that R does not interpret them as objects. It would definitely get grumpy and give us an error that it did not have objects by these names. And you see that R also prints them in quotes.
We can also put functions inside of other functions; these are called nested functions. When we add another function inside a function, R will evaluate them from the inside out.
```
c(sum(cars$dist), "San Francisco", "Cal Academy")
```
```
## [1] "2149" "San Francisco" "Cal Academy"
```
So R first evaluated the `sum(cars$dist)`, and then evaluates the `c()` statement.
This example demonstrates another key idea in R: the idea of **classes**. The output R provides is called a vector, and everything within that vector has to be the same type of thing: we can’t have both numbers and words inside. So here R is able to first calculate `sum(cars$dist)` as a number, but then `c()` will turn that number into text, called a “string” in R: you see that it is in quotes. It is no longer a numeric; it is a string.
This is a big difference between R and Excel, since Excel allows you to have a mix of text and numeric in the same column or row. R’s way can feel restrictive, but it is also more predictable. In Excel, you might have a single number in your whole sheet that Excel is silently interpreting as text so it is causing errors in the analyses. In R, the whole column will be the same type. This can still cause trouble, but that is where the good practices that we are learning together can help minimize that kind of trouble.
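One way to see this coercion directly is with the `class()` function (a quick check you could type in the Console):

```
class(sum(cars$dist))
class(c(sum(cars$dist), "San Francisco", "Cal Academy"))
```

```
## [1] "numeric"
## [1] "character"
```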
We will not discuss classes or work with nested functions very much in this workshop (the tidyverse design and pipe operator make nested functions less prevalent). But we wanted to introduce them to you because they will be something you encounter as you continue on your journey with R.
3\.6 Help pages
---------------
Every function available to you should have a help page, and you access it by typing a question mark preceding the function name in the Console.
Let’s have a deeper look at the arguments for `plot()`, using the help pages.
```
?plot
```
This opens up the correct page in the Help Tab in the bottom\-right of the RStudio IDE. You can also click on the tab and type in the function name in the search bar.
All help pages have the same format; here is how I look at it:
The help page gives the name of the package in the top left, and is broken down into sections:
> Help pages
>
> \- Description: An extended description of what the function does.
> \- Usage: The arguments of the function and their default values.
> \- Arguments: An explanation of the data each argument is expecting.
> \- Details: Any important details to be aware of.
> \- Value: The data the function returns.
> \- See Also: Any related functions you might find useful.
> \- Examples: Some examples for how to use the function.
When I look at a help page, I start with the Description to see if I am in the right place for what I need to do. Reading the description for `plot` lets me know that yup, this is the function I want.
I next look at the usage and arguments, which give me a more concrete view into what the function does. `plot` requires arguments for `x` and `y`. But we passed only one argument to `plot()`: we passed the cars dataset (`plot(cars)`). R is able to understand that it should use the two columns in that dataset as x and y, and it does so based on order: the first column “speed” becomes x and the second column “dist” becomes y. The `...` means that there are many other arguments we can pass to `plot()`, which we should expect: I think we can all agree that it would be nice to have the option of making this figure a little more beautiful and compelling. Glancing at some of these arguments, we can see that they are mostly about the style of the plot.
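For instance, we could name the `x` and `y` arguments explicitly and pass a couple of optional styling arguments (a sketch; `col` and `main` are optional styling arguments):

```
plot(x = cars$speed, y = cars$dist, col = "blue",
     main = "Stopping distance by speed")
```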
Next, I usually scroll down to the bottom to the examples. This is where I can actually see how the function is used, and I can also paste those examples into the Console to see their output. Let’s try it:
```
plot(sin, -pi, 2*pi)
```
3\.7 Commenting
---------------
I’ve been working in the Console to illustrate working interactively with the live R process. But it is likely that you may want to write some of these things as notes in your R Markdown file. That’s great!
But you may not want everything you type to be run when you knit your document. So you can tell R not to run something by “commenting it out.” This is done with one or more pound/hash/number signs: `#`. So if I wanted to write a note to myself about using `?` to open the help pages, I would write this in my R Markdown code chunk:
```
## open help pages with ?:
# ?plot
```
RStudio color\-codes comments as green so they are easier to see.
Notice that my convention is to use two `##`’s for my notes, and only one for the code that I don’t want to run now, but might want to run other times. I like this convention because in RStudio you can uncomment/recomment multiple lines of code at once if you use just one `#`: do this by going to the menu Code \> Comment/Uncomment Lines (keyboard shortcut on my Mac: Shift\-Command\-C).
> **Aside**: Note also that the hashtag `#` is used differently in Markdown and in R. In R, a hashtag indicates a comment that will not be evaluated. You can use as many as you want: `#` is equivalent to `######`. In Markdown, a hashtag indicates a level of a header. And the number you use matters: `#` is a “level one header,” meaning the biggest font and the top of the hierarchy. `###` is a level three header, and will show up nested below the `#` and `##` headers.
3\.8 Assigning objects with `<-`
--------------------------------
In Excel, data are stored in the spreadsheet. In R, they are stored in objects. Data can be a variety of formats, for example numeric and strings like we just talked about.
We will be working with data objects that are rectangular in shape. If they only have one column or one row, they are also called a vector. And we assign these objects names.
This is a big difference with Excel, where you usually identify data by its location on the grid, like `$A1:D$20`. (You can do this with Excel by naming ranges of cells, but many people don’t do this.)
We assign an object a name by writing the name along with the assignment operator `<-`. Let’s try it by creating a variable called “x” and assigning it to 10\.
```
x <- 10
```
When I see this written, in my head I hear “x gets 10\.”
When we send this to the Console (I do this with Command \- Enter), notice how nothing is printed in return. This is because when we assign a variable, by default it is not returned. We can see what x is by typing it in the Console and hitting enter.
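For example, typing the name in the Console prints its value:

```
x
```

```
## [1] 10
```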
We can also assign objects with existing objects. Let’s say we want to have the distance traveled by cars in its own variable, and multiply by 1000 (assuming these data are in km and we want m).
```
dist_m <- cars$dist * 1000
```
Object names can be whatever you want, although it is wise to not name objects by functions that you know exist, for example “c” or “false.” Additionally, they cannot start with a digit and cannot contain spaces. Different folks have different conventions; you will be wise to adopt a [convention for demarcating words](http://en.wikipedia.org/wiki/Snake_case) in names.
```
## i_use_snake_case
## other.people.use.periods
## evenOthersUseCamelCase
## also-there-is-kebab-case
```
3\.9 R Packages
---------------
So far we’ve been using a couple functions that are included with R out\-of\-the\-box such as `plot()` and `c()`. We say that these functions are from “Base R.” But, one of the amazing things about R is that a vast user community is always creating new functions and packages that expand R’s capabilities.
In R, the fundamental unit of shareable code is the package. A package bundles together code, data, documentation (including to create the help pages), and tests, and is easy to share with others. They increase the power of R by improving existing base R functionalities, or by adding new ones.
The traditional place to download packages is from CRAN, the [Comprehensive R Archive Network](https://cran.r-project.org/), which is where you downloaded R. CRAN is like a grocery store or iTunes for vetted R packages.
> **Aside**: You can also install packages from GitHub; see [`devtools::install_github()`](https://devtools.r-lib.org/)
You don’t need to go to CRAN’s website to install packages; this can be accomplished within R using the command `install.packages("package-name-in-quotes")`.
### 3\.9\.1 How do you know what packages/functions exist?
How do you know what packages exist? Well, how do you know what movies exist on iTunes? You learn what’s available based on your needs, your interests, and the community around you. We’ll introduce you to several really powerful packages that we work with and help you find others that might be of interest to you.
### 3\.9\.2 Installing R Packages
Let’s install several packages that we will be using shortly. Write this in your R Markdown document and run it:
```
## setup packages
install.packages("usethis")
```
And after you run it, comment it out:
```
## setup packages
# install.packages("usethis")
```
Now we’ve installed the package, but we need to tell R that we are going to use the functions within the `usethis` package. We do this by using the function `library()`.
In my mind, this is analogous to needing to wire your house for electricity: this is something you do once; this is `install.packages()`. But then you need to turn on the lights each time you need them; this is `library()`, which you run once per R session.
It’s a nice convention to do this on the same line as your commented\-out `install.packages()` line; this makes it easy for someone (including future you, or you on another computer) to install the package.
```
## setup packages
library(usethis) # install.packages("usethis")
```
When `usethis` is successfully attached, you won’t get any feedback in the Console. So unless you get an error, this worked for you.
Now let’s do the same with the `here` package.
```
library(here) # install.packages("here")
```
```
## here() starts at /Users/lowndes/github/rstudio-conf-2020/r-for-excel
```
```
# here() starts at /Users/lowndes
```
`here` also successfully attached but isn’t quiet about it. It is a “chatty” package; when we attached it, it responded with the filepath where we are working from. This is the same as `~/`, which we saw earlier.
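Once attached, `here()` can also build file paths relative to that starting point. For example (a sketch with a made\-up file name):

```
## builds a path relative to where here() starts, e.g. ".../data/my_data.csv"
here("data", "my_data.csv")
```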
Finally, let’s install the `tidyverse` package.
```
# install.packages("tidyverse")
```
“The tidyverse is a coherent system of packages for data manipulation, exploration and visualization that share a common design philosophy.” \- Joseph Rickert: [What is the tidyverse?](https://rviews.rstudio.com/2017/06/08/what-is-the-tidyverse/), RStudio Community Blog.
This may take a little while to complete.
3\.10 GitHub brief intro \& config
----------------------------------
Before we break, we are going to set up Git and GitHub which we will be using along with R and RStudio for the rest of the workshop.
Before we do the setup configuration, let me take a moment to talk about what Git and GitHub are.
It helps me to think of GitHub like Dropbox: you identify folders for GitHub to ‘track’ and it syncs them to the cloud. This is good first\-and\-foremost because it makes a back\-up copy of your files: if your computer dies not all of your work is gone. But with GitHub, you have to be more deliberate about when syncs are made. This is because GitHub saves these as different versions, with information about who contributed when, line\-by\-line. This makes collaboration easier, and it allows you to roll\-back to different versions or contribute to others’ work.
git will track and version your files; GitHub stores this online and enables you to collaborate with others (and yourself). Although git and GitHub are two different things, distinct from each other, we can think of them as a bundle since we will always use them together.
### 3\.10\.1 Configure GitHub
This set up is a one\-time thing! You will only have to do this once per computer. We’ll walk through this together. In a browser, go to github.com and to your profile page as a reminder.
**You will need to remember your GitHub username, the email address you created your GitHub account with, and your GitHub password.**
We will be using the `use_git_config()` function from the `usethis` package we just installed. Since we already installed and attached this package, type this into your Console:
```
## use_git_config function with my username and email as arguments
use_git_config(user.name = "jules32", user.email = "jules32@example.org")
```
If you see `Error in use_git_config() : could not find function "use_git_config"` please run `library("usethis")`
### 3\.10\.2 Ensure that Git/GitHub/RStudio are communicating
We are going to go through a few steps to ensure that Git/GitHub are communicating with RStudio.
#### 3\.10\.2\.1 RStudio: New Project
Click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
#### 3\.10\.2\.2 Select Version Control
#### 3\.10\.2\.3 Select Git
Since we are using git.
Do you see what I see?
If yes, hooray! Time for a break!
If no, we will help you troubleshoot.
1. Double check that GitHub username and email are correct
2. Troubleshooting, starting with [HappyGitWithR’s troubleshooting chapter](http://happygitwithr.com/troubleshooting.html)
* `which git` (Mac, Linux, or anything running a bash shell)
* `where git` (Windows, when not in a bash shell)
3. Potentially set up an RStudio Cloud account: <https://rstudio.cloud/>
### 3\.10\.3 Troubleshooting
#### 3\.10\.3\.1 Configure git from Terminal
If `usethis` fails, the following is the classic approach to configuring **git**. Open the Git Bash program (Windows) or the Terminal (Mac) and type the following:
```
# display your version of git
git --version
# replace USER with your Github user account
git config --global user.name USER
# replace NAME@EMAIL.EDU with the email you used to register with Github
git config --global user.email NAME@EMAIL.EDU
# list your config to confirm user.* variables set
git config --list
```
This will configure git with global (`--global`) commands, which means it will apply ‘globally’ to all your future github repositories, rather than only to this one now. **Note for PCs**: We’ve seen PC failures correct themselves by doing the above but omitting `--global`. (Then you will need to configure GitHub for every repo you clone but that is fine for now).
#### 3\.10\.3\.2 Troubleshooting
All troubleshooting starts with reading Happy Git With R’s [RStudio, Git, GitHub Hell](http://happygitwithr.com/troubleshooting.html) troubleshooting chapter.
##### 3\.10\.3\.2\.1 New(ish) Error on a Mac
We’ve also seen the following errors from RStudio:
```
error key does not contain a section --global terminal
```
and
```
fatal: not in a git directory
```
To solve this, go to the Terminal and type:
`which git`
Look at the filepath that is returned. Does it say anything to do with Apple?
\-\> If yes, then the [Git you downloaded](https://git-scm.com/downloads) isn’t installed; please re\-download it if necessary and follow the instructions to install.
\-\> If no, (in the example image, the filepath does not say anything with Apple) then proceed below:
In RStudio, navigate to: Tools \> Global Options \> Git/SVN.
Does the **“Git executable”** filepath match the filepath that the Terminal returned?
If not, click the browse button and navigate there.
> *Note*: on my laptop, even though I navigated to /usr/local/bin/git, it then automatically redirected because /usr/local/bin/git was an alias on my computer. That is fine. Click OK.
### 3\.10\.4 END **RStudio/RMarkdown** session!
3\.1 Summary
------------
We will begin learning R through RMarkdown, which helps you tell your story of data analysis because you can write text alongside the code. We are actually learning two languages at once: R and Markdown.
### 3\.1\.1 Objectives
In this lesson we will get familiar with:
* the RStudio interface
* RMarkdown
* functions, packages, help pages, and error messages
* assigning variables and commenting
* configuring GitHub with RStudio
### 3\.1\.2 Resources
* [What is RMarkdown?](https://vimeo.com/178485416) awesome 1\-minute video by RStudio
* [R for Data Science](http://r4ds.had.co.nz/) by Hadley Wickham and Garrett Grolemund
* [STAT 545](http://stat545.com/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com) by Jenny Bryan
* [R for Excel Users](https://blog.shotwell.ca/posts/r_for_excel_users/) by Gordon Shotwell
* [Welcome to the tidyverse](https://joss.theoj.org/papers/10.21105/joss.01686) by Hadley Wickham et al.
* [A GIF\-based introduction to RStudio](https://www.pipinghotdata.com/posts/2020-09-07-introducing-the-rstudio-ide-and-r-markdown/) \- Shannon Pileggi, Piping Hot Data
### 3\.1\.1 Objectives
In this lesson we will get familiar with:
* the RStudio interface
* RMarkdown
* functions, packages, help pages, and error messages
* assigning variables and commenting
* configuring GitHub with RStudio
### 3\.1\.2 Resources
* [What is RMarkdown?](https://vimeo.com/178485416) awesome 1\-minute video by RStudio
* [R for Data Science](http://r4ds.had.co.nz/) by Hadley Wickham and Garrett Grolemund
* [STAT 545](http://stat545.com/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com) by Jenny Bryan
* [R for Excel Users](https://blog.shotwell.ca/posts/r_for_excel_users/) by Gordon Shotwell
* [Welcome to the tidyverse](https://joss.theoj.org/papers/10.21105/joss.01686) by Hadley Wickham et al.
* [A GIF\-based introduction to RStudio](https://www.pipinghotdata.com/posts/2020-09-07-introducing-the-rstudio-ide-and-r-markdown/) \- Shannon Pileggi, Piping Hot Data
3\.2 RStudio Orientation
------------------------
What is the RStudio IDE (integrated development environment)? The RStudio IDE is software that greatly improves your R experience.
I think that **R is your airplane, and the RStudio IDE is your airport**. You are the pilot, and you use R to go places! With practice you’ll gain skills and confidence; you can fly further distances and get through tricky situations. You will become an awesome pilot and can fly your plane anywhere. And the RStudio IDE provides support! Runways, communication, community, and other services that makes your life as a pilot much easier. It provides not only the infrastructure but a hub for the community that you can interact with.
To launch RStudio, double\-click on the RStudio icon. Launching RStudio also launches R, and you will probably never open R by itself.
Notice the default panes:
* Console (entire left)
* Environment/History (tabbed in upper right)
* Files/Plots/Packages/Help (tabbed in lower right)
We won’t click through this all immediately but we will become familiar with more of the options and capabilities throughout the next few days.
Something critical to know now is that you can make everything you see BIGGER by going to the navigation pane: View \> Zoom In. Learn these keyboard shortcuts; being able to see what you’re typing will help avoid typos \& help us help you.
An important first question: **where are we?**
If you’ve have opened RStudio for the first time, you’ll be in your Home directory. This is noted by the `~/` at the top of the console. You can see too that the Files pane in the lower right shows what is in the Home directory where you are. You can navigate around within that Files pane and explore, but note that you won’t change where you are: even as you click through you’ll still be Home: `~/`.
We are going to have our first experience with R through RMarkdown, so let’s do the following.
3\.3 Intro to RMarkdown
-----------------------
An RMarkdown file is a plain text file that allow us to write code and text together, and when it is “knit,” the code will be evaluated and the text formatted so that it creates a reproducible report or document that is nice to read as a human.
This is really critical to reproducibility, and it also saves time. This document will recreate your figures for you in the same document where you are writing text. So no more doing analysis, saving a plot, pasting that plot into Word, redoing the analysis, re\-saving, re\-pasting, etc.
This 1\-minute video does the best job of introducing RMarkdown: [What is RMarkdown?](https://vimeo.com/178485416).
Now let’s experience this a bit ourselves and then we’ll talk about it more.
### 3\.3\.1 Create an RMarkdown file
Let’s do this together:
File \-\> New File \-\> RMarkdown… (or alternatively you can click the green plus in the top left \-\> RMarkdown).
Let’s title it “Testing” and write our name as author, then click OK with the recommended Default Output Format, which is HTML.
OK, first off: by opening a file, we are seeing the 4th pane of the RStudio console, which here is a text editor. This lets us dock and organize our files within RStudio instead of having a bunch of different windows open (but there are options to pop them out if that is what you prefer).
Let’s have a look at this file — it’s not blank; there is some initial text is already provided for you. Let’s have a high\-level look through of it:
* The top part has the Title and Author we provided, as well as today’s date and the output type as an HTML document like we selected above.
* There are white and grey sections. These are the 2 main languages that make up an RMarkdown file.
+ **Grey sections are R code**
+ **White sections are Markdown text**
* There is black and blue text (we’ll ignore the green text for now).
### 3\.3\.2 Knit your RMarkdown file
Let’s go ahead and “Knit” by clicking the blue yarn at the top of the RMarkdown file.
It’s going to ask us to save first, I’ll name mine “testing.Rmd.” Note that this is by default going to save this file in your home directory `/~`. Since this is a testing document this is fine to save here; we will get more organized about where we save files very soon. Once you click Save, the knit process will be able to continue.
OK so how cool is this, we’ve just made an html file! This is a single webpage that we are viewing locally on our own computers. Knitting this RMarkdown document has rendered — we also say formatted — both the Markdown text (white) and the R code (grey), and the it also executed — we also say ran — the R code.
Let’s have a look at them side\-by\-side:
Let’s take a deeper look at these two files. So much of learning to code is looking for patterns.
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
### 3\.3\.3 Markdown text
Let’s look more deeply at the Markdown text. Markdown is a formatting language for plain text, and there are only a handful of rules to know.
Notice the syntax for:
* **headers** with `#` or `##`
* **bold** with `**`
To see more of the rules, let’s look at RStudio’s built\-in reference. Let’s do this: Help \> Markdown Quick Reference
There are also good [cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet) available online.
### 3\.3\.4 R code
Let’s look at the R code that we see executed in our knitted document.
We see that:
* `summary(cars)` produces a table with information about cars
* `plot(pressure)` produces a plot with information about pressure
There are a couple of things going on here.
`summary()` and `plot()` are called **functions**; they are operations and these ones come installed with R. We call functions installed with R **base R functions**. This is similar to Excel’s functions and formulas.
`cars` and `pressure` are small datasets that come installed with R.
We’ll talk more about functions and data shortly.
### 3\.3\.5 Code chunks
R code is written in code chunks, which are grey.
Each of them start with 3 backticks and `{r label}` that signify there will be R code following. Anything inside the brackets (`{ }`) is instructions for RMarkdown about that code to run. For example:
* the first chunk labeled “setup” says `include=FALSE`, and we don’t see it included in the HTML document.
* the second chunk labeled “cars” has no additional instructions, and in the HTML document we see the code and the evaluation of that code (a summary table)
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed, we only see the plot when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but will become powerful as you become a powerful R user. But if you label your code chunks, you must have unique labels.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
There are many more options available that we will discuss as we get more familiar with RMarkdown.
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown first in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: doesn’t have to be only R, other languages supported.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to to add some extra carriage returns because sometimes I find it more pleasant to look at my code, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
### 3\.3\.1 Create an RMarkdown file
Let’s do this together:
File \-\> New File \-\> RMarkdown… (or alternatively you can click the green plus in the top left \-\> RMarkdown).
Let’s title it “Testing” and write our name as author, then click OK with the recommended Default Output Format, which is HTML.
OK, first off: by opening a file, we are seeing the 4th pane of the RStudio console, which here is a text editor. This lets us dock and organize our files within RStudio instead of having a bunch of different windows open (but there are options to pop them out if that is what you prefer).
Let’s have a look at this file — it’s not blank; there is some initial text is already provided for you. Let’s have a high\-level look through of it:
* The top part has the Title and Author we provided, as well as today’s date and the output type as an HTML document like we selected above.
* There are white and grey sections. These are the 2 main languages that make up an RMarkdown file.
+ **Grey sections are R code**
+ **White sections are Markdown text**
* There is black and blue text (we’ll ignore the green text for now).
### 3\.3\.2 Knit your RMarkdown file
Let’s go ahead and “Knit” by clicking the blue yarn at the top of the RMarkdown file.
It’s going to ask us to save first, I’ll name mine “testing.Rmd.” Note that this is by default going to save this file in your home directory `/~`. Since this is a testing document this is fine to save here; we will get more organized about where we save files very soon. Once you click Save, the knit process will be able to continue.
OK so how cool is this, we’ve just made an html file! This is a single webpage that we are viewing locally on our own computers. Knitting this RMarkdown document has rendered — we also say formatted — both the Markdown text (white) and the R code (grey), and the it also executed — we also say ran — the R code.
Let’s have a look at them side\-by\-side:
Let’s take a deeper look at these two files. So much of learning to code is looking for patterns.
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
### 3\.3\.3 Markdown text
Let’s look more deeply at the Markdown text. Markdown is a formatting language for plain text, and there are only a handful of rules to know.
Notice the syntax for:
* **headers** with `#` or `##`
* **bold** with `**`
To see more of the rules, let’s look at RStudio’s built\-in reference. Let’s do this: Help \> Markdown Quick Reference
There are also good [cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet) available online.
### 3\.3\.4 R code
Let’s look at the R code that we see executed in our knitted document.
We see that:
* `summary(cars)` produces a table with information about cars
* `plot(pressure)` produces a plot with information about pressure
There are a couple of things going on here.
`summary()` and `plot()` are called **functions**; they are operations and these ones come installed with R. We call functions installed with R **base R functions**. This is similar to Excel’s functions and formulas.
`cars` and `pressure` are small datasets that come installed with R.
We’ll talk more about functions and data shortly.
### 3\.3\.5 Code chunks
R code is written in code chunks, which are grey.
Each of them starts with 3 backticks and `{r label}`, which signify that R code will follow. Anything inside the curly braces (`{ }`) is instructions for RMarkdown about how to run that code. For example:
* the first chunk labeled “setup” says `include=FALSE`, and we don’t see it included in the HTML document.
* the second chunk labeled “cars” has no additional instructions, and in the HTML document we see the code and the evaluation of that code (a summary table)
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed; we only see the plot produced when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but they become more useful as you become a more experienced R user. If you do label your code chunks, every label must be unique.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
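For example, here is a tiny check you can try in the Console (an optional aside, not part of our Rmd): because R reads `TRUE` as 1 and `FALSE` as 0, you can do math with them, whereas lowercase `true`/`false` would not be recognized.
```
sum(c(TRUE, FALSE, TRUE, TRUE)) # TRUE counts as 1, FALSE as 0
```
```
## [1] 3
```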
There are many more options available that we will discuss as we get more familiar with RMarkdown.
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown file in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: code chunks don’t have to contain only R; other languages are supported as well.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to add some extra carriage returns, because sometimes I find it more pleasant to look at my code with some whitespace, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
3\.4 R code in the Console
--------------------------
So far we have been telling R to execute our code only when we knit the document, but we can also write code in the Console to interact with the live R process.
The Console (bottom left pane of the RStudio IDE) is where you can interact with the R engine and run code directly.
Let’s type this in the Console: `summary(pressure)` and hit enter. We see the pressure summary table returned; it is the same information that we saw in our knitted html document. By default, R will display (we also say “print”) the executed result in the Console.
```
summary(pressure)
```
We can also do math as we can in Excel: type the following and press enter.
```
8*22.3
```
### 3\.4\.1 Error messages
When you code in R or any language, you will encounter errors. We will discuss troubleshooting tips more deeply tomorrow in [Collaborating \& getting help](#collaboration); here we will just get a little comfortable with them.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
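As an aside, if you are ever unsure whether an object is available, base R’s `exists()` function will tell you (a quick optional check, not something the workshop depends on):
```
exists("pressure") # a dataset that comes with R
exists("pressur") # our typo
```
```
## [1] TRUE
## [1] FALSE
```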
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that if you label your code chunks, the labels need to be unique. Let’s see what happens if they are not. If I rename our `summary(pressure)` chunk to “cars,” I will see an error when I try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
### 3\.4\.2 Running RMarkdown code chunks
So far we have written code in our RMarkdown file that is executed when we knit the file. We have also written code directly in the Console that is executed when we press enter/return. Additionally, we can write code in an RMarkdown code chunk and execute it by sending it into the Console (i.e. we can execute code without knitting the document).
How do we do it? There are several ways. Let’s do each of these with `summary(pressure)`.
**First approach: send R code to the Console.**
This approach involves selecting (highlighting) the R code only (`summary(pressure)`), not any of the backticks/fences from the code chunk.
> **Troubleshooting:** If you see `Error: attempt to use zero-length variable name` it is because you have accidentally highlighted the backticks along with the R code. Try again — and don’t forget that you can add spaces within the code chunk or make your RStudio session bigger (View \> Zoom In)!
Do this by selecting code and then:
1. copy\-pasting it into the Console and pressing enter/return.
2. clicking ‘Run’ from the RStudio IDE. This is available from:
1. the bar above the file (green arrow)
2. the menu bar: Code \> Run Selected Line(s)
3. keyboard shortcut: command\-return
**Second approach: run full code chunk.**
Since we are already grouping relevant code together in chunks, it’s reasonable that we might want to run it all together at once.
Do this by placing your cursor within a code chunk and then:
1. clicking the little black down arrow next to the Run green arrow and selecting Run Current Chunk. Notice there are also options to run all chunks, run all chunks above or below…
### 3\.4\.3 Writing code in a file vs. Console
When should you write code in a file (.Rmd or .R script) and when should you write it in the Console?
We write things in the file that are necessary for our analysis and that we want to preserve for reproducibility; we will be doing this throughout the workshop to give you a good sense of this. A file is also a great way for you to take notes to yourself.
The Console is good for doing quick calculations like `8*22.3`, testing functions, for calling help pages, for installing packages. We’ll explore these things next.
3\.5 R functions
----------------
Like Excel, the power of R comes not from doing small operations individually (like `8*22.3`). R’s power comes from being able to operate on whole suites of numbers and datasets.
And also like Excel, much of R’s power comes from its built\-in functions, which you can use in your analyses (and, as we’ll see, R users can easily create and share functions, and it is this open source developer and contributor community that makes R so awesome).
R has a mind\-blowing collection of built\-in functions that all use the same syntax: a function name followed by parentheses around whatever the function needs in order to do its job.
We’ve seen a few functions already: we’ve seen `plot()` and `summary()`.
Functions always have the same structure: a name, parentheses, and arguments that you can specify. `function_name(arguments)`. When we talk about function names, we use the convention `function_name()` (the name with empty parentheses), but in practice, we usually supply arguments to the function `function_name(arguments)` so that it works on some data. Let’s see a few more function examples.
Like in Excel, there is a function called “sum” to calculate a total. In R, it is spelled lowercase: `sum()`. (As I type in the Console, R will provide suggestions).
Let’s use the `sum()` function to calculate the sum of all the distances traveled in the `cars` dataset. We specify a single column of a dataset using the `$` operator:
```
sum(cars$dist)
```
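If you want to peek at the column itself before summing it, `head()` shows its first few values (an optional check you could run in the Console):
```
head(cars$dist)
```
```
## [1]  2 10  4 22 16 10
```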
Another function is simply called `c()`, which combines values together.
So let’s create a new R code chunk. And we’ll write:
```
c(1, 7:9)
```
```
## [1] 1 7 8 9
```
> Aside: some functions don’t require arguments: try typing `date()` into the Console. Be sure to type the parentheses (`date()`); otherwise R will return the code behind the `date()` function rather than the output that you want/expect.
So you can see that this combines these values all into the same place, which is called a vector. We could also do this with non\-numeric examples, which are called “strings”:
```
c("San Francisco", "Cal Academy")
```
```
## [1] "San Francisco" "Cal Academy"
```
We need to put quotes around non\-numeric values so that R does not try to interpret them as objects. Otherwise R would get grumpy and give us an error saying it cannot find objects by those names. And you see that R also prints them in quotes.
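For example, here is a sketch of what happens if we forget the quotes (the exact wording of the error may vary slightly):
```
c(SanFrancisco, CalAcademy)
#> Error: object 'SanFrancisco' not found
```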
We can also put functions inside of other functions; these are called nested functions. When we add another function inside a function, R will evaluate them from the inside out.
```
c(sum(cars$dist), "San Francisco", "Cal Academy")
```
```
## [1] "2149" "San Francisco" "Cal Academy"
```
So R first evaluated the `sum(cars$dist)`, and then evaluates the `c()` statement.
This example demonstrates another key idea in R: the idea of **classes**. The output R provides is called a vector, and everything within that vector has to be the same type of thing: we can’t have both numbers and words inside. So here R is able to first calculate `sum(cars$dist)` as a number, but then `c()` turns that number into text, called a “string” in R: you see that it is in quotes. It is no longer a numeric; it is a string.
This is a big difference between R and Excel, since Excel allows you to have a mix of text and numeric in the same column or row. R’s way can feel restrictive, but it is also more predictable. In Excel, you might have a single number in your whole sheet that Excel is silently interpreting as text so it is causing errors in the analyses. In R, the whole column will be the same type. This can still cause trouble, but that is where the good practices that we are learning together can help minimize that kind of trouble.
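A quick way to see this coercion for yourself is base R’s `class()` function, which reports what type of thing an object is (just an illustration; you won’t need it for the rest of the workshop):
```
class(sum(cars$dist))
class(c(sum(cars$dist), "San Francisco", "Cal Academy"))
```
```
## [1] "numeric"
## [1] "character"
```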
We will not discuss classes or work with nested functions very much in this workshop (the tidyverse design and pipe operator make nested functions less prevalent). But we wanted to introduce them to you because they will be something you encounter as you continue on your journey with R.
3\.6 Help pages
---------------
Every function available to you should have a help page, and you access it by typing a question mark preceding the function name in the Console.
Let’s have a deeper look at the arguments for `plot()`, using the help pages.
```
?plot
```
This opens up the correct page in the Help Tab in the bottom\-right of the RStudio IDE. You can also click on the tab and type in the function name in the search bar.
All help pages have the same format; here is how I look at it:
The help page gives the name of the package in the top left, and is broken down into sections:
> Help pages
>
> \- Description: An extended description of what the function does.
> \- Usage: The arguments of the function and their default values.
> \- Arguments: An explanation of the data each argument is expecting.
> \- Details: Any important details to be aware of.
> \- Value: The data the function returns.
> \- See Also: Any related functions you might find useful.
> \- Examples: Some examples for how to use the function.
When I look at a help page, I start with the Description to see if I am in the right place for what I need to do. Reading the description for `plot` lets me know that yup, this is the function I want.
I next look at the usage and arguments, which give me a more concrete view into what the function does. `plot` requires arguments for `x` and `y`. But we passed only one argument to `plot()`: we passed the cars dataset (`plot(cars)`). R is able to understand that it should use the two columns in that dataset as x and y, and it does so based on order: the first column “speed” becomes x and the second column “dist” becomes y. The `...` means that there are many other arguments we can pass to `plot()`, which we should expect: I think we can all agree that it would be nice to have the option of making this figure a little more beautiful and compelling. Glancing at some of the arguments, we can see that many of them control the style of the plot.
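If you prefer to be explicit rather than relying on column order, you could supply x and y yourself; this is just an alternative way to get essentially the same scatterplot (the axis labels will differ slightly):
```
plot(cars$speed, cars$dist)
```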
Next, I usually scroll down to the bottom to the examples. This is where I can actually see how the function is used, and I can also paste those examples into the Console to see their output. Let’s try it:
```
plot(sin, -pi, 2*pi)
```
3\.7 Commenting
---------------
I’ve been working in the Console to illustrate working interactively with the live R process. But it is likely that you may want to write some of these things as notes in your R Markdown file. That’s great!
But you may not want everything you type to be run when you knit your document. So you can tell R not to run something by “commenting it out.” This is done with one or more pound/hash/number signs: `#`. So if I wanted to write a note to myself about using `?` to open the help pages, I would write this in my R Markdown code chunk:
```
## open help pages with ?:
# ?plot
```
RStudio color\-codes comments as green so they are easier to see.
Notice that my convention is to use two `##`’s for my notes, and only one for the code that I don’t want to run now, but might want to run other times. I like this convention because in RStudio you can uncomment/recomment multiple lines of code at once if you use just one `#`: do this by going to the menu Code \> Comment/Uncomment Lines (keyboard shortcut on my Mac: Shift\-Command\-C).
> **Aside**: Note also that the hashtag `#` is used differently in Markdown and in R. In R, a hashtag indicates a comment that will not be evaluated. You can use as many as you want: `#` is equivalent to `######`. In Markdown, a hashtag indicates a level of a header. And the number you use matters: `#` is a “level one header,” meaning the biggest font and the top of the hierarchy. `###` is a level three header, and will show up nested below the `#` and `##` headers.
3\.8 Assigning objects with `<-`
--------------------------------
In Excel, data are stored in the spreadsheet. In R, they are stored in objects. Data can be a variety of formats, for example numeric and strings like we just talked about.
We will be working with data objects that are rectangular in shape. If they only have one column or one row, they are also called a vector. And we assign these objects names.
This is a big difference from Excel, where you usually identify data by its location on the grid, like `$A$1:$D$20`. (You can name ranges of cells in Excel too, but many people don’t do this.)
We assign an object a name by writing the name along with the assignment operator `<-`. Let’s try it by creating a variable called “x” and assigning it the value 10\.
```
x <- 10
```
When I see this written, in my head I hear “x gets 10\.”
When we send this to the Console (I do this with Command \- Enter), notice how nothing is printed in return. This is because when we assign a variable, by default it is not returned. We can see what x is by typing it in the Console and hitting enter.
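For example, typing just the name and pressing enter prints the value stored in `x`:
```
x
```
```
## [1] 10
```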
We can also create new objects from existing objects. Let’s say we want to have the distance traveled by cars in its own variable, and multiply it by 1000 (assuming these data are in km and we want m).
```
dist_m <- cars$dist * 1000
```
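As before, nothing is printed when we assign. A quick peek with `head()` (an optional check) confirms what `dist_m` contains:
```
head(dist_m)
```
```
## [1]  2000 10000  4000 22000 16000 10000
```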
Object names can be almost whatever you want, although it is wise not to reuse the names of functions or reserved words that you know exist, for example “c” or “FALSE.” Additionally, names cannot start with a digit and cannot contain spaces. Different folks have different conventions; you will be wise to adopt a [convention for demarcating words](http://en.wikipedia.org/wiki/Snake_case) in names.
```
## i_use_snake_case
## other.people.use.periods
## evenOthersUseCamelCase
## also-there-is-kebab-case
```
3\.9 R Packages
---------------
So far we’ve been using a couple functions that are included with R out\-of\-the\-box such as `plot()` and `c()`. We say that these functions are from “Base R.” But, one of the amazing things about R is that a vast user community is always creating new functions and packages that expand R’s capabilities.
In R, the fundamental unit of shareable code is the package. A package bundles together code, data, documentation (including to create the help pages), and tests, and is easy to share with others. They increase the power of R by improving existing base R functionalities, or by adding new ones.
The traditional place to download packages is from CRAN, the [Comprehensive R Archive Network](https://cran.r-project.org/), which is where you downloaded R. CRAN is like a grocery store or iTunes for vetted R packages.
> **Aside**: You can also install packages from GitHub; see [`devtools::install_github()`](https://devtools.r-lib.org/)
You don’t need to go to CRAN’s website to install packages, this can be accomplished within R using the command `install.packages("package-name-in-quotes")`.
### 3\.9\.1 How do you know what packages/functions exist?
How do you know what packages exist? Well, how do you know what movies exist on iTunes? You learn what’s available based on your needs, your interests, and the community around you. We’ll introduce you to several really powerful packages that we work with and help you find others that might be of interest to you. *provide examples here*
### 3\.9\.2 Installing R Packages
Let’s install several packages that we will be using shortly. Write this in your R Markdown document and run it:
```
## setup packages
install.packages("usethis")
```
And after you run it, comment it out:
```
## setup packages
# install.packages("usethis")
```
Now we’ve installed the package, but we need to tell R that we are going to use the functions within the `usethis` package. We do this by using the function `library()`.
In my mind, this is analogous to wiring your house for electricity: that is something you do once, and it is like `install.packages()`. But then you need to turn on the lights each time you want to use them, which is like calling `library()` in each R session.
It’s a nice convention to do this on the same line as your commented\-out `install.packages()` line; this makes it easy for someone (including future you, perhaps on a different computer) to install the package.
```
## setup packages
library(usethis) # install.packages("usethis")
```
When `usethis` is successfully attached, you won’t get any feedback in the Console. So unless you get an error, this worked for you.
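If you would like reassurance, one optional check is to run `sessionInfo()` in the Console; it lists your R version and the packages currently attached, and `usethis` should appear among them:
```
sessionInfo() # look for usethis under "other attached packages"
```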
Now let’s do the same with the `here` package.
```
library(here) # install.packages("here")
```
```
## here() starts at /Users/lowndes/github/rstudio-conf-2020/r-for-excel
```
(The path above is from the computer this book was built on; your Console will show your own working directory instead, something like `here() starts at /Users/lowndes`.)
`here` also attached successfully, but it isn’t quiet about it. It is a “chatty” package: when we attached it, it responded with the filepath we are working from. This is the same as the home directory `~/` that we saw earlier.
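What is that filepath for? The `here()` function builds paths relative to that top\-level directory, which keeps file paths working across computers. As a small sketch (the folder and file names below are made up for illustration):
```
## build a path relative to where here() starts;
## "data" and "my_file.csv" are hypothetical examples
here("data", "my_file.csv")
```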
Finally, let’s install the `tidyverse` package.
```
# install.packages("tidyverse")
```
“The tidyverse is a coherent system of packages for data manipulation, exploration and visualization that share a common design philosophy.” \- Joseph Rickert: [What is the tidyverse?](https://rviews.rstudio.com/2017/06/08/what-is-the-tidyverse/), RStudio Community Blog.
This may take a little while to complete.
3\.10 GitHub brief intro \& config
----------------------------------
Before we break, we are going to set up Git and GitHub which we will be using along with R and RStudio for the rest of the workshop.
Before we do the setup configuration, let me take a moment to talk about what Git and GitHub are.
It helps me to think of GitHub like Dropbox: you identify folders for GitHub to ‘track’ and it syncs them to the cloud. This is good first\-and\-foremost because it makes a back\-up copy of your files: if your computer dies, not all of your work is gone. But with GitHub, you have to be more deliberate about when syncs are made. This is because GitHub saves these as different versions, with information about who contributed when, line\-by\-line. This makes collaboration easier, and it allows you to roll\-back to different versions or contribute to others’ work.
git will track and version your files; GitHub stores this online and enables you to collaborate with others (and yourself). Although git and GitHub are two different things, distinct from each other, we can think of them as a bundle since we will always use them together.
### 3\.10\.1 Configure GitHub
This setup is a one\-time thing! You will only have to do this once per computer. We’ll walk through this together. In a browser, go to github.com and to your profile page as a reminder.
**You will need to remember your GitHub username, the email address you created your GitHub account with, and your GitHub password.**
We will be using the `use_git_config()` function from the `usethis` package we just installed. Since we already installed and attached this package, type this into your Console:
```
## use_git_config function with my username and email as arguments
use_git_config(user.name = "jules32", user.email = "jules32@example.org")
```
If you see `Error in use_git_config() : could not find function "use_git_config"`, please run `library("usethis")`.
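As an optional sanity check (assuming a reasonably recent version of `usethis`), its `git_sitrep()` function prints a Git “situation report” that includes the user.name and user.email you just set:
```
## optional: print a Git situation report to confirm your settings
usethis::git_sitrep()
```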
### 3\.10\.2 Ensure that Git/GitHub/RStudio are communicating
We are going to go through a few steps to ensure that Git/GitHub are communicating with RStudio.
#### 3\.10\.2\.1 RStudio: New Project
Click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
#### 3\.10\.2\.2 Select Version Control
#### 3\.10\.2\.3 Select Git
Since we are using git.
Do you see what I see?
If yes, hooray! Time for a break!
If no, we will help you troubleshoot.
1. Double check that GitHub username and email are correct
2. Troubleshooting, starting with [HappyGitWithR’s troubleshooting chapter](http://happygitwithr.com/troubleshooting.html)
* `which git` (Mac, Linux, or anything running a bash shell)
* `where git` (Windows, when not in a bash shell)
3. Potentially set up an RStudio Cloud account: <https://rstudio.cloud/>
### 3\.10\.3 Troubleshooting
#### 3\.10\.3\.1 Configure git from Terminal
If `usethis` fails, the following is the classic approach to configuring **git**. Open the Git Bash program (Windows) or the Terminal (Mac) and type the following:
```
# display your version of git
git --version
# replace USER with your Github user account
git config --global user.name USER
# replace NAME@EMAIL.EDU with the email you used to register with Github
git config --global user.email NAME@EMAIL.EDU
# list your config to confirm user.* variables set
git config --list
```
This will configure git with global (`--global`) commands, which means it will apply ‘globally’ to all your future GitHub repositories, rather than only to this one now. **Note for PCs**: We’ve seen PC failures correct themselves by doing the above but omitting `--global`. (Then you will need to configure Git for every repo you clone, but that is fine for now.)
#### 3\.10\.3\.2 Troubleshooting
All troubleshooting starts with reading Happy Git With R’s [RStudio, Git, GitHub Hell](http://happygitwithr.com/troubleshooting.html) troubleshooting chapter.
##### 3\.10\.3\.2\.1 New(ish) Error on a Mac
We’ve also seen the following errors from RStudio:
```
error key does not contain a section --global terminal
```
and
```
fatal: not in a git directory
```
To solve this, go to the Terminal and type:
`which git`
Look at the filepath that is returned. Does it say anything to do with Apple?
\-\> If yes, then the [Git you downloaded](https://git-scm.com/downloads) isn’t installed; please redownload if necessary, and follow the instructions to install it.
\-\> If no (in the example image, the filepath does not mention Apple), then proceed below:
In RStudio, navigate to: Tools \> Global Options \> Git/SVN.
Does the **“Git executable”** filepath match what the Terminal returned?
If not, click the browse button and navigate there.
> *Note*: on my laptop, even though I navigated to /usr/local/bin/git, it then automatically redirected because /usr/local/bin/git was an alias on my computer. That is fine. Click OK.
### 3\.10\.4 END **RStudio/RMarkdown** session!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/rstudio.html |
Chapter 3 R \& RStudio, RMarkdown
=================================
3\.1 Summary
------------
We will begin learning R through RMarkdown, which helps you tell your story of data analysis because you can write text alongside the code. We are actually learning two languages at once: R and Markdown.
### 3\.1\.1 Objectives
In this lesson we will get familiar with:
* the RStudio interface
* RMarkdown
* functions, packages, help pages, and error messages
* assigning variables and commenting
* configuring GitHub with RStudio
### 3\.1\.2 Resources
* [What is RMarkdown?](https://vimeo.com/178485416) awesome 1\-minute video by RStudio
* [R for Data Science](http://r4ds.had.co.nz/) by Hadley Wickham and Garrett Grolemund
* [STAT 545](http://stat545.com/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com) by Jenny Bryan
* [R for Excel Users](https://blog.shotwell.ca/posts/r_for_excel_users/) by Gordon Shotwell
* [Welcome to the tidyverse](https://joss.theoj.org/papers/10.21105/joss.01686) by Hadley Wickham et al.
* [A GIF\-based introduction to RStudio](https://www.pipinghotdata.com/posts/2020-09-07-introducing-the-rstudio-ide-and-r-markdown/) \- Shannon Pileggi, Piping Hot Data
3\.2 RStudio Orientation
------------------------
What is the RStudio IDE (integrated development environment)? The RStudio IDE is software that greatly improves your R experience.
I think that **R is your airplane, and the RStudio IDE is your airport**. You are the pilot, and you use R to go places! With practice you’ll gain skills and confidence; you can fly further distances and get through tricky situations. You will become an awesome pilot and can fly your plane anywhere. And the RStudio IDE provides support! Runways, communication, community, and other services that make your life as a pilot much easier. It provides not only the infrastructure but a hub for the community that you can interact with.
To launch RStudio, double\-click on the RStudio icon. Launching RStudio also launches R, and you will probably never open R by itself.
Notice the default panes:
* Console (entire left)
* Environment/History (tabbed in upper right)
* Files/Plots/Packages/Help (tabbed in lower right)
We won’t click through this all immediately but we will become familiar with more of the options and capabilities throughout the next few days.
Something critical to know now is that you can make everything you see BIGGER by going to the navigation pane: View \> Zoom In. Learn these keyboard shortcuts; being able to see what you’re typing will help avoid typos \& help us help you.
An important first question: **where are we?**
If you’ve opened RStudio for the first time, you’ll be in your Home directory. This is noted by the `~/` at the top of the console. You can see too that the Files pane in the lower right shows what is in the Home directory where you are. You can navigate around within that Files pane and explore, but note that you won’t change where you are: even as you click through you’ll still be Home: `~/`.
We are going to have our first experience with R through RMarkdown, so let’s do the following.
3\.3 Intro to RMarkdown
-----------------------
An RMarkdown file is a plain text file that allows us to write code and text together, and when it is “knit,” the code will be evaluated and the text formatted so that it creates a reproducible report or document that is nice to read as a human.
This is really critical to reproducibility, and it also saves time. This document will recreate your figures for you in the same document where you are writing text. So no more doing analysis, saving a plot, pasting that plot into Word, redoing the analysis, re\-saving, re\-pasting, etc.
This 1\-minute video does the best job of introducing RMarkdown: [What is RMarkdown?](https://vimeo.com/178485416).
Now let’s experience this a bit ourselves and then we’ll talk about it more.
### 3\.3\.1 Create an RMarkdown file
Let’s do this together:
File \-\> New File \-\> RMarkdown… (or alternatively you can click the green plus in the top left \-\> RMarkdown).
Let’s title it “Testing” and write our name as author, then click OK with the recommended Default Output Format, which is HTML.
OK, first off: by opening a file, we are seeing the 4th pane of the RStudio IDE, which here is a text editor. This lets us dock and organize our files within RStudio instead of having a bunch of different windows open (but there are options to pop them out if that is what you prefer).
Let’s have a look at this file — it’s not blank; some initial text is already provided for you. Let’s have a high\-level look through it:
* The top part has the Title and Author we provided, as well as today’s date and the output type as an HTML document like we selected above.
* There are white and grey sections. These are the 2 main languages that make up an RMarkdown file.
+ **Grey sections are R code**
+ **White sections are Markdown text**
* There is black and blue text (we’ll ignore the green text for now).
### 3\.3\.2 Knit your RMarkdown file
Let’s go ahead and “Knit” by clicking the blue yarn at the top of the RMarkdown file.
It’s going to ask us to save first; I’ll name mine “testing.Rmd”. Note that by default this will save the file in your home directory `~/`. Since this is a testing document, it is fine to save here; we will get more organized about where we save files very soon. Once you click Save, the knit process will continue.
OK, so how cool is this: we’ve just made an html file! This is a single webpage that we are viewing locally on our own computers. Knitting this RMarkdown document has rendered — we also say formatted — both the Markdown text (white) and the R code (grey), and it has also executed — we also say run — the R code.
Let’s have a look at them side\-by\-side:
Let’s take a deeper look at these two files. So much of learning to code is looking for patterns.
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
### 3\.3\.3 Markdown text
Let’s look more deeply at the Markdown text. Markdown is a formatting language for plain text, and there are only a handful of rules to know.
Notice the syntax for:
* **headers** with `#` or `##`
* **bold** with `**`
To see more of the rules, let’s look at RStudio’s built\-in reference. Let’s do this: Help \> Markdown Quick Reference
There are also good [cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet) available online.
### 3\.3\.4 R code
Let’s look at the R code that we see executed in our knitted document.
We see that:
* `summary(cars)` produces a table with information about cars
* `plot(pressure)` produces a plot with information about pressure
There are a couple of things going on here.
`summary()` and `plot()` are called **functions**; they are operations and these ones come installed with R. We call functions installed with R **base R functions**. This is similar to Excel’s functions and formulas.
`cars` and `pressure` are small datasets that come installed with R.
We’ll talk more about functions and data shortly.
### 3\.3\.5 Code chunks
R code is written in code chunks, which are grey.
Each of them starts with 3 backticks and `{r label}`, which signify that R code will follow. Anything inside the curly braces (`{ }`) is instructions for RMarkdown about how to run that code. For example:
* the first chunk labeled “setup” says `include=FALSE`, and we don’t see it included in the HTML document.
* the second chunk labeled “cars” has no additional instructions, and in the HTML document we see the code and the evaluation of that code (a summary table)
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed; we only see the plot produced when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but they become more useful as you become a more experienced R user. If you do label your code chunks, every label must be unique.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
There are many more options available that we will discuss as we get more familiar with RMarkdown.
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown file in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: code chunks don’t have to contain only R; other languages are supported as well.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to add some extra carriage returns, because sometimes I find it more pleasant to look at my code with some whitespace, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
3\.4 R code in the Console
--------------------------
So far we have been telling R to execute our code only when we knit the document, but we can also write code in the Console to interact with the live R process.
The Console (bottom left pane of the RStudio IDE) is where you can interact with the R engine and run code directly.
Let’s type this in the Console: `summary(pressure)` and hit enter. We see the pressure summary table returned; it is the same information that we saw in our knitted html document. By default, R will display (we also say “print”) the executed result in the Console.
```
summary(pressure)
```
We can also do math as we can in Excel: type the following and press enter.
```
8*22.3
```
### 3\.4\.1 Error messages
When you code in R or any language, you will encounter errors. We will discuss troubleshooting tips more deeply tomorrow in [Collaborating \& getting help](#collaboration); here we will just get a little comfortable with them.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
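One small sketch of how to act on a message like this (not the only approach): base R’s `ls()` lists the objects currently in your workspace, so you can check whether the name you typed actually exists before assuming something deeper is wrong.

```
## list the objects in your workspace; a typo like 'pressur'
## will not be there (built-in datasets such as pressure live
## in R's packages, so they will not appear here either, but
## they are still available by name)
ls()

summary(pressure)   # correcting the typo fixes the error
```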
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that if you label your code chunks, the labels need to be unique. Let’s see what happens if they are not. If I (re)name our `summary(pressure)` chunk to “cars,” I will see an error when I try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
### 3\.4\.2 Running RMarkdown code chunks
So far we have written code in our RMarkdown file that is executed when we knit the file. We have also written code directly in the Console that is executed when we press enter/return. Additionally, we can write code in an RMarkdown code chunk and execute it by sending it into the Console (i.e. we can execute code without knitting the document).
How do we do it? There are several ways. Let’s do each of these with `summary(pressure)`.
**First approach: send R code to the Console.**
This approach involves selecting (highlighting) the R code only (`summary(pressure)`), not any of the backticks/fences from the code chunk.
> **Troubleshooting:** If you see `Error: attempt to use zero-length variable name` it is because you have accidentally highlighted the backticks along with the R code. Try again — and don’t forget that you can add spaces within the code chunk or make your RStudio session bigger (View \> Zoom In)!
Do this by selecting the code and then either:
1. copy\-pasting it into the Console and pressing enter/return, or
2. clicking ‘Run’ in the RStudio IDE. This is available from:
    1. the bar above the file (green arrow)
    2. the menu bar: Code \> Run Selected Line(s)
    3. the keyboard shortcut: Command\-Return (Ctrl\-Enter on Windows)
**Second approach: run full code chunk.**
Since we are already grouping relevant code together in chunks, it’s reasonable that we might want to run it all together at once.
Do this by placing your cursor within a code chunk and then:
1. clicking the little black down arrow next to the Run green arrow and selecting Run Current Chunk. Notice there are also options to run all chunks, run all chunks above or below…
### 3\.4\.3 Writing code in a file vs. Console
When should you write code in a file (.Rmd or .R script) and when should you write it in the Console?
We write things in the file that are necessary for our analysis and that we want to preserve for reproducibility; we will be doing this throughout the workshop to give you a good sense of this. A file is also a great way for you to take notes to yourself.
The Console is good for doing quick calculations like `8*22.3`, testing functions, calling help pages, and installing packages. We’ll explore these things next.
3\.5 R functions
----------------
Like Excel, the power of R comes not from doing small operations individually (like `8*22.3`). R’s power comes from being able to operate on whole suites of numbers and datasets.
And also like Excel, some of the biggest power in R is that there are built\-in functions that you can use in your analyses (and, as we’ll see, R users can easily create and share functions, and it is this open source developer and contributor community that makes R so awesome).
R has a mind\-blowing collection of built\-in functions that are all used with the same syntax: the function name, with parentheses around whatever the function needs in order to do its job.
We’ve seen a few functions already: we’ve seen `plot()` and `summary()`.
Functions always have the same structure: a name, parentheses, and arguments that you can specify. `function_name(arguments)`. When we talk about function names, we use the convention `function_name()` (the name with empty parentheses), but in practice, we usually supply arguments to the function `function_name(arguments)` so that it works on some data. Let’s see a few more function examples.
Like in Excel, there is a function called “sum” to calculate a total. In R, it is spelled lowercase: `sum()`. (As I type in the Console, R will provide suggestions).
Let’s use the `sum()` function to calculate the sum of all the distances traveled in the `cars` dataset. We specify a single column of a dataset using the `$` operator:
```
sum(cars$dist)
```
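Other built\-in summary functions follow the same pattern; here is a small sketch using the same column (all of these come with base R):

```
mean(cars$dist)   # average stopping distance
max(cars$dist)    # longest stopping distance
min(cars$dist)    # shortest stopping distance
```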
Another function is simply called `c()`, which combines values together.
So let’s create a new R code chunk. And we’ll write:
```
c(1, 7:9)
```
```
## [1] 1 7 8 9
```
> Aside: some functions don’t require arguments: try typing `date()` into the Console. Be sure to type the parentheses (`date()`); otherwise R will return the code behind the `date()` function rather than the output that you want/expect.
So you can see that this combines these values all into the same place, which is called a vector here. We could also do this with non\-numeric values, which are called “strings”:
```
c("San Francisco", "Cal Academy")
```
```
## [1] "San Francisco" "Cal Academy"
```
We need to put quotes around non\-numeric values so that R does not interpret them as objects. It would definitely get grumpy and give us an error saying that it does not have objects by these names. And you see that R also prints them in quotes.
We can also put functions inside of other functions; these are called nested functions. When we nest one function inside another, R evaluates them from the inside\-out.
```
c(sum(cars$dist), "San Francisco", "Cal Academy")
```
```
## [1] "2149" "San Francisco" "Cal Academy"
```
So R first evaluates `sum(cars$dist)`, and then evaluates the `c()` statement.
This example demonstrates another key idea in R: the idea of **classes**. The output R provides is called a vector, and everything within that vector has to be the same type of thing: we can’t have both numbers and words inside. So here R is able to first calculate `sum(cars$dist)` as a number, but then `c()` turns that number into text, called a “string” in R: you can see that it is in quotes. It is no longer a numeric, it is a string.
This is a big difference between R and Excel, since Excel allows you to have a mix of text and numeric in the same column or row. R’s way can feel restrictive, but it is also more predictable. In Excel, you might have a single number in your whole sheet that Excel is silently interpreting as text so it is causing errors in the analyses. In R, the whole column will be the same type. This can still cause trouble, but that is where the good practices that we are learning together can help minimize that kind of trouble.
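If you are ever unsure which class something has, base R’s `class()` function will tell you. A quick sketch using the values we just combined:

```
class(sum(cars$dist))                       # "numeric"
class(c(sum(cars$dist), "San Francisco"))   # "character"
```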
We will not discuss classes or work with nested functions very much in this workshop (the tidyverse design and pipe operator make nested functions less prevalent). But we wanted to introduce them to you because they will be something you encounter as you continue on your journey with R.
3\.6 Help pages
---------------
Every function available to you should have a help page, and you access it by typing a question mark preceding the function name in the Console.
Let’s have a deeper look at the arguments for `plot()`, using the help pages.
```
?plot
```
This opens up the correct page in the Help Tab in the bottom\-right of the RStudio IDE. You can also click on the tab and type in the function name in the search bar.
All help pages have the same format; here is how I look at them:
The help page shows the name of the package in the top left, and is broken down into sections:
> Help pages
>
> \- Description: An extended description of what the function does.
> \- Usage: The arguments of the function and their default values.
> \- Arguments: An explanation of the data each argument is expecting.
> \- Details: Any important details to be aware of.
> \- Value: The data the function returns.
> \- See Also: Any related functions you might find useful.
> \- Examples: Some examples for how to use the function.
When I look at a help page, I start with the Description to see if I am in the right place for what I need to do. Reading the description for `plot` lets me know that yup, this is the function I want.
I next look at the usage and arguments, which give me a more concrete view into what the function does. `plot` requires arguments for `x` and `y`. But we passed only one argument to `plot()`: we passed the cars dataset (`plot(cars)`). R is able to understand that it should use the two columns in that dataset as x and y, and it does so based on order: the first column “speed” becomes x and the second column “dist” becomes y. The `...` means that there are many other arguments we can pass to `plot()`, which we should expect: I think we can all agree that it would be nice to have the option of making this figure a little more beautiful and compelling. Glancing at some of the arguments, we can see that they are mostly about the style of the plot.
Next, I usually scroll down to the bottom to the examples. This is where I can actually see how the function is used, and I can also paste those examples into the Console to see their output. Let’s try it:
```
plot(sin, -pi, 2*pi)
```
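As a hedged illustration of those extra style arguments, here is one way to dress up the `cars` plot using standard base\-graphics parameters (the title and axis labels are just descriptive choices; see `?cars` for the documented units):

```
plot(cars,
     main = "Stopping distance vs. speed",  # plot title
     xlab = "Speed (mph)",                  # x-axis label
     ylab = "Stopping distance (ft)",       # y-axis label
     col  = "blue",                         # point color
     pch  = 19)                             # filled-circle points
```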
3\.7 Commenting
---------------
I’ve been working in the Console to illustrate working interactively with the live R process. But it is likely that you will want to write some of these things as notes in your R Markdown file. That’s great!
But you may not want everything you type to be run when you knit your document. So you can tell R not to run something by “commenting it out.” This is done with one or more pound/hash/number signs: `#`. So if I wanted to write a note to myself about using `?` to open the help pages, I would write this in my R Markdown code chunk:
```
## open help pages with ?:
# ?plot
```
RStudio color\-codes comments as green so they are easier to see.
Notice that my convention is to use two `##`’s for my notes, and only one for the code that I don’t want to run now, but might want to run other times. I like this convention because in RStudio you can uncomment/recomment multiple lines of code at once if you use just one `#`: do this by going to the menu Code \> Comment/Uncomment Lines (keyboard shortcut on my Mac: Shift\-Command\-C).
> **Aside**: Note also that the hashtag `#` is used differently in Markdown and in R. In R, a hashtag indicates a comment that will not be evaluated. You can use as many as you want: `#` is equivalent to `######`. In Markdown, a hashtag indicates a level of a header. And the number you use matters: `#` is a “level one header,” meaning the biggest font and the top of the hierarchy. `###` is a level three header, and will show up nested below the `#` and `##` headers.
3\.8 Assigning objects with `<-`
--------------------------------
In Excel, data are stored in the spreadsheet. In R, they are stored in objects. Data can come in a variety of formats, for example numeric values and strings like we just talked about.
We will be working with data objects that are rectangular in shape. If an object has only one column or one row, it is also called a vector. And we assign these objects names.
This is a big difference with Excel, where you usually identify data by its location on the grid, like `$A1:D$20`. (You can do this with Excel by naming ranges of cells, but many people don’t do this.)
We assign an object a name by writing the name along with the assignment operator `<-`. Let’s try it by creating a variable called “x” and assigning it to 10\.
```
x <- 10
```
When I see this written, in my head I hear “x gets 10\.”
When we send this to the Console (I do this with Command \- Enter), notice how nothing is printed in return. This is because when we assign a variable, by default it is not returned. We can see what x is by typing it in the Console and hitting enter.
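For example, after sending the assignment, typing just the name prints the value:

```
x
```

```
## [1] 10
```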
We can also create objects from existing objects. Let’s say we want to have the distance traveled by cars in its own variable, and to multiply it by 1000 (assuming these data are in km and we want m).
```
dist_m <- cars$dist * 1000
```
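To check that the new object looks the way we expect, we can peek at it; `head()` shows the first six values and `length()` counts how many there are (both come with base R):

```
head(dist_m)     # first six converted distances
length(dist_m)   # one value per row of the cars dataset
```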
Object names can be whatever you want, although it is wise not to name objects after functions that you know exist (for example `c`) or after reserved words like `FALSE`. Additionally, names cannot start with a digit and cannot contain spaces. Different folks have different conventions; you will be wise to adopt a [convention for demarcating words](http://en.wikipedia.org/wiki/Snake_case) in names.
```
## i_use_snake_case
## other.people.use.periods
## evenOthersUseCamelCase
## also-there-is-kebab-case
```
3\.9 R Packages
---------------
So far we’ve been using a couple functions that are included with R out\-of\-the\-box such as `plot()` and `c()`. We say that these functions are from “Base R.” But, one of the amazing things about R is that a vast user community is always creating new functions and packages that expand R’s capabilities.
In R, the fundamental unit of shareable code is the package. A package bundles together code, data, documentation (including the files used to build the help pages), and tests, and is easy to share with others. Packages increase the power of R by improving existing base R functionalities, or by adding new ones.
The traditional place to download packages is from CRAN, the [Comprehensive R Archive Network](https://cran.r-project.org/), which is where you downloaded R. CRAN is like a grocery store or iTunes for vetted R packages.
> **Aside**: You can also install packages from GitHub; see [`devtools::install_github()`](https://devtools.r-lib.org/)
You don’t need to go to CRAN’s website to install packages; this can be accomplished within R using the command `install.packages("package-name-in-quotes")`.
### 3\.9\.1 How do you know what packages/functions exist?
How do you know what packages exist? Well, how do you know what movies exist on iTunes? You learn what’s available based on your needs, your interests, and the community around you. We’ll introduce you to several really powerful packages that we work with and help you find others that might be of interest to you.
### 3\.9\.2 Installing R Packages
Let’s install several packages that we will be using shortly. Write this in your R Markdown document and run it:
```
## setup packages
install.packages("usethis")
```
And after you run it, comment it out:
```
## setup packages
# install.packages("usethis")
```
Now we’ve installed the package, but we need to tell R that we are going to use the functions within the `usethis` package. We do this by using the function `library()`.
In my mind, this is analogous to wiring your house for electricity: that is something you do once, and it is like `install.packages()`. But then you need to turn on the lights each time you want to use them; that is like running `library()`, which you do once per R session.
It’s a nice convention to do this on the same line as your commented\-out `install.packages()` line; this makes it easier for someone (including future you, perhaps on a different computer) to install the package.
```
## setup packages
library(usethis) # install.packages("usethis")
```
When `usethis` is successfully attached, you won’t get any feedback in the Console. So unless you get an error, this worked for you.
Now let’s do the same with the `here` package.
```
library(here) # install.packages("here")
```
```
## here() starts at /Users/lowndes/github/rstudio-conf-2020/r-for-excel
```
```
# here() starts at /Users/lowndes
```
`here` also attached successfully, but it isn’t quiet about it. It is a “chatty” package: when we attached it, it responded with the filepath we are working from (the exact path will differ on your computer). This is the same as `~/`, which we saw earlier.
Finally, let’s install the `tidyverse` package.
```
# install.packages("tidyverse")
```
“The tidyverse is a coherent system of packages for data manipulation, exploration and visualization that share a common design philosophy.” \- Joseph Rickert: [What is the tidyverse?](https://rviews.rstudio.com/2017/06/08/what-is-the-tidyverse/), RStudio Community Blog.
This may take a little while to complete.
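Once the installation finishes, the tidyverse is attached the same way as the other packages, following the same commented\-out convention. (Unlike `usethis`, it is chatty: it prints a startup message listing the core packages it attaches.)

```
## setup packages
library(tidyverse) # install.packages("tidyverse")
```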
3\.10 GitHub brief intro \& config
----------------------------------
Before we break, we are going to set up Git and GitHub which we will be using along with R and RStudio for the rest of the workshop.
Before we do the setup configuration, let me take a moment to talk about what Git and GitHub are.
It helps me to think of GitHub like Dropbox: you identify folders for GitHub to ‘track’ and it syncs them to the cloud. This is good first\-and\-foremost because it makes a back\-up copy of your files: if your computer dies, not all of your work is gone. But with GitHub, you have to be more deliberate about when syncs are made. This is because GitHub saves these as different versions, with information about who contributed when, line\-by\-line. This makes collaboration easier, and it allows you to roll back to different versions or contribute to others’ work.
git tracks and versions your files; GitHub stores them online and enables you to collaborate with others (and with yourself). Although git and GitHub are two distinct things, we can think of them as a bundle since we will always use them together.
### 3\.10\.1 Configure GitHub
This setup is a one\-time thing! You will only have to do this once per computer. We’ll walk through it together. In a browser, go to github.com and open your profile page as a reminder of your account details.
**You will need to remember your GitHub username, the email address you created your GitHub account with, and your GitHub password.**
We will be using the `use_git_config()` function from the `usethis` package we just installed. Since we already installed and attached this package, type this into your Console:
```
## use_git_config function with my username and email as arguments
use_git_config(user.name = "jules32", user.email = "jules32@example.org")
```
If you see `Error in use_git_config() : could not find function "use_git_config"` please run `library("usethis")`
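To double\-check that the configuration was recorded, the `usethis` package also provides `git_sitrep()`, a git/GitHub “situation report.” This step is optional, but it prints your `user.name` and `user.email` (among other details) so you can confirm they are set:

```
## optional: print a git/GitHub situation report
git_sitrep()
```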
### 3\.10\.2 Ensure that Git/GitHub/RStudio are communicating
We are going to go through a few steps to ensure that git/GitHub are communicating with RStudio.
#### 3\.10\.2\.1 RStudio: New Project
Click on New Project. There are a few different ways to do this; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left (also in the File menu).
#### 3\.10\.2\.2 Select Version Control
#### 3\.10\.2\.3 Select Git
We select Git because that is the version control system we are using.
Do you see what I see?
If yes, hooray! Time for a break!
If no, we will help you troubleshoot.
1. Double check that GitHub username and email are correct
2. Troubleshooting, starting with [HappyGitWithR’s troubleshooting chapter](http://happygitwithr.com/troubleshooting.html)
* `which git` (Mac, Linux, or anything running a bash shell)
* `where git` (Windows, when not in a bash shell)
3. Potentially set up an RStudio Cloud account: <https://rstudio.cloud/>
### 3\.10\.3 Troubleshooting
#### 3\.10\.3\.1 Configure git from Terminal
If `usethis` fails, the following is the classic approach to configuring **git**. Open the Git Bash program (Windows) or the Terminal (Mac) and type the following:
```
# display your version of git
git --version
# replace USER with your Github user account
git config --global user.name USER
# replace NAME@EMAIL.EDU with the email you used to register with Github
git config --global user.email NAME@EMAIL.EDU
# list your config to confirm user.* variables set
git config --list
```
This will configure git with global (`--global`) commands, which means it will apply ‘globally’ to all your future git repositories, rather than only to this one. **Note for PCs**: We’ve seen PC failures correct themselves by doing the above but omitting `--global`. (Then you will need to configure git for every repo you clone or create, but that is fine for now.)
#### 3\.10\.3\.2 Troubleshooting
All troubleshooting starts with reading Happy Git With R’s [RStudio, Git, GitHub Hell](http://happygitwithr.com/troubleshooting.html) troubleshooting chapter.
##### 3\.10\.3\.2\.1 New(ish) Error on a Mac
We’ve also seen the following errors from RStudio:
```
error key does not contain a section --global terminal
```
and
```
fatal: not in a git directory
```
To solve this, go to the Terminal and type:
`which git`
Look at the filepath that is returned. Does it say anything to do with Apple?
\-\> If yes, then the [Git you downloaded](https://git-scm.com/downloads) isn’t installed; re\-download it if necessary, and follow the instructions to install it.
\-\> If no (that is, the filepath does not mention Apple), then proceed below:
In RStudio, navigate to: Tools \> Global Options \> Git/SVN.
Does the **“Git executable”** filepath match the filepath that the Terminal returned?
If not, click the browse button and navigate there.
> *Note*: on my laptop, even though I navigated to /usr/local/bin/git, it then automatically redirected because /usr/local/bin/git was an alias on my computer. That is fine. Click OK.
### 3\.10\.4 END **RStudio/RMarkdown** session!
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed, we only see the plot when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but will become powerful as you become a powerful R user. But if you label your code chunks, you must have unique labels.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
There are many more options available that we will discuss as we get more familiar with RMarkdown.
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown first in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: doesn’t have to be only R, other languages supported.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to to add some extra carriage returns because sometimes I find it more pleasant to look at my code, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
### 3\.3\.1 Create an RMarkdown file
Let’s do this together:
File \-\> New File \-\> RMarkdown… (or alternatively you can click the green plus in the top left \-\> RMarkdown).
Let’s title it “Testing” and write our name as author, then click OK with the recommended Default Output Format, which is HTML.
OK, first off: by opening a file, we are seeing the 4th pane of the RStudio console, which here is a text editor. This lets us dock and organize our files within RStudio instead of having a bunch of different windows open (but there are options to pop them out if that is what you prefer).
Let’s have a look at this file — it’s not blank; there is some initial text is already provided for you. Let’s have a high\-level look through of it:
* The top part has the Title and Author we provided, as well as today’s date and the output type as an HTML document like we selected above.
* There are white and grey sections. These are the 2 main languages that make up an RMarkdown file.
+ **Grey sections are R code**
+ **White sections are Markdown text**
* There is black and blue text (we’ll ignore the green text for now).
### 3\.3\.2 Knit your RMarkdown file
Let’s go ahead and “Knit” by clicking the blue yarn at the top of the RMarkdown file.
It’s going to ask us to save first, I’ll name mine “testing.Rmd.” Note that this is by default going to save this file in your home directory `/~`. Since this is a testing document this is fine to save here; we will get more organized about where we save files very soon. Once you click Save, the knit process will be able to continue.
OK so how cool is this, we’ve just made an html file! This is a single webpage that we are viewing locally on our own computers. Knitting this RMarkdown document has rendered — we also say formatted — both the Markdown text (white) and the R code (grey), and the it also executed — we also say ran — the R code.
Let’s have a look at them side\-by\-side:
Let’s take a deeper look at these two files. So much of learning to code is looking for patterns.
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
#### 3\.3\.2\.1 Activity
Introduce yourself to the person sitting next to you. Discuss what you notice with these two files. Then we will have a brief share\-out with the group. (5 mins)
### 3\.3\.3 Markdown text
Let’s look more deeply at the Markdown text. Markdown is a formatting language for plain text, and there are only a handful of rules to know.
Notice the syntax for:
* **headers** with `#` or `##`
* **bold** with `**`
To see more of the rules, let’s look at RStudio’s built\-in reference. Let’s do this: Help \> Markdown Quick Reference
There are also good [cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Here-Cheatsheet) available online.
### 3\.3\.4 R code
Let’s look at the R code that we see executed in our knitted document.
We see that:
* `summary(cars)` produces a table with information about cars
* `plot(pressure)` produces a plot with information about pressure
There are a couple of things going on here.
`summary()` and `plot()` are called **functions**; they are operations and these ones come installed with R. We call functions installed with R **base R functions**. This is similar to Excel’s functions and formulas.
`cars` and `pressure` are small datasets that come installed with R.
We’ll talk more about functions and data shortly.
### 3\.3\.5 Code chunks
R code is written in code chunks, which are grey.
Each of them start with 3 backticks and `{r label}` that signify there will be R code following. Anything inside the brackets (`{ }`) is instructions for RMarkdown about that code to run. For example:
* the first chunk labeled “setup” says `include=FALSE`, and we don’t see it included in the HTML document.
* the second chunk labeled “cars” has no additional instructions, and in the HTML document we see the code and the evaluation of that code (a summary table)
* the third chunk labeled “pressure” says `echo=FALSE`, and in the HTML document we do not see the code echoed, we only see the plot when the code is executed.
> **Aside: Code chunk labels** It is possible to label your code chunks. This is to help us navigate between them and keep them organized. In our example Rmd, our three chunks say `r` as the language, and have a label (`setup`, `cars`, `pressure`).
>
> Labels are optional, but will become powerful as you become a powerful R user. But if you label your code chunks, you must have unique labels.
Notice how the word `FALSE` is all capitals. Capitalization matters in R; `TRUE/FALSE` is something that R can interpret as a binary yes/no or 1/0\.
There are many more options available that we will discuss as we get more familiar with RMarkdown.
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown first in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: doesn’t have to be only R, other languages supported.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to to add some extra carriage returns because sometimes I find it more pleasant to look at my code, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
#### 3\.3\.5\.1 New code chunks
We can create a new chunk in your RMarkdown first in one of these ways:
* click “Insert \> R” at the top of the editor pane (with the green plus and green box)
* type it by hand:
\`\`\`{r}
\`\`\`
* copy\-paste an existing chunk — but remember to relabel it something unique! (we’ll explore this more in a moment)
> **Aside**: doesn’t have to be only R, other languages supported.
Let’s create a new code chunk at the end of our document.
Now, let’s write some code in R. Let’s say we want to see the summary of the `pressure` data. I’m going to press enter to to add some extra carriage returns because sometimes I find it more pleasant to look at my code, and it helps in troubleshooting, which is often about identifying typos. R lets you use as much whitespace as you would like.
```
summary(pressure)
```
We can knit this and see the summary of `pressure`. This is the same data that we see with the plot just above.
> Troubleshooting: Did trying to knit your document produce an error? Start by looking at your code again. Do you have both open `(` and close `)` parentheses? Are your code chunk fences (\`\`\`) correct?
3\.4 R code in the Console
--------------------------
So far we have been telling R to execute our code only when we knit the document, but we can also write code in the Console to interact with the live R process.
The Console (bottom left pane of the RStudio IDE) is where you can interact with the R engine and run code directly.
Let’s type this in the Console: `summary(pressure)` and hit enter. We see the pressure summary table returned; it is the same information that we saw in our knitted html document. By default, R will display (we also say “print”) the executed result in the Console
```
summary(pressure)
```
We can also do math as we can in Excel: type the following and press enter.
```
8*22.3
```
### 3\.4\.1 Error messages
When you code in R or any language, you will encounter errors. We will discuss troubleshooting tips more deeply tomorrow in [Collaborating \& getting help](#collaboration); here we will just get a little comfortable with them.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that you label your code chunks, they need to be unique. Let’s see what happens if not. If I (re)name our `summary(pressure)` chunk to “cars,” I will see an error when you try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
### 3\.4\.2 Running RMarkdown code chunks
So far we have written code in our RMarkdown file that is executed when we knit the file. We have also written code directly in the Console that is executed when we press enter/return. Additionally, we can write code in an RMarkdown code chunk and execute it by sending it into the Console (i.e. we can execute code without knitting the document).
How do we do it? There are several ways. Let’s do each of these with `summary(pressure)`.
**First approach: send R code to the Console.**
This approach involves selecting (highlighting) the R code only (`summary(pressure)`), not any of the backticks/fences from the code chunk.
> **Troubleshooting:** If you see `Error: attempt to use zero-length variable name` it is because you have accidentally highlighted the backticks along with the R code. Try again — and don’t forget that you can add spaces within the code chunk or make your RStudio session bigger (View \> Zoom In)!
Do this by selecting code and then:
1. copy\-pasting into the Console and press enter/return.
2. clicking ‘Run’ from RStudio IDE. This is available from:
1. the bar above the file (green arrow)
2. the menu bar: Code \> Run Selected Line(s)
3. keyboard shortcut: command\-return
**Second approach: run full code chunk.**
Since we are already grouping relevant code together in chunks, it’s reasonable that we might want to run it all together at once.
Do this by placing your curser within a code chunk and then:
1. clicking the little black down arrow next to the Run green arrow and selecting Run Current Chunk. Notice there are also options to run all chunks, run all chunks above or below…
### 3\.4\.3 Writing code in a file vs. Console
When should you write code in a file (.Rmd or .R script) and when should you write it in the Console?
We write things in the file that are necessary for our analysis and that we want to preserve for reproducibility; we will be doing this throughout the workshop to give you a good sense of this. A file is also a great way for you to take notes to yourself.
The Console is good for doing quick calculations like `8*22.3`, testing functions, for calling help pages, for installing packages. We’ll explore these things next.
### 3\.4\.1 Error messages
When you code in R or any language, you will encounter errors. We will discuss troubleshooting tips more deeply tomorrow in [Collaborating \& getting help](#collaboration); here we will just get a little comfortable with them.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that you label your code chunks, they need to be unique. Let’s see what happens if not. If I (re)name our `summary(pressure)` chunk to “cars,” I will see an error when you try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
#### 3\.4\.1\.1 R error messages
**Error messages are your friends**.
What do they look like? I’ll demo typing in the Console `summary(pressur)`
```
summary(pressur)
#> Error in summary(pressur): object 'pressur' not found
```
Error messages are R’s way of saying that it didn’t understand what you said. This is like in English when we say “What?” or “Pardon?” And like in spoken language, some error messages are more helpful than others. Like if someone says “Sorry, could you repeat that last word” rather than only “What?”
In this case, R is saying “I didn’t understand `pressur`.” R tracks the datasets it has available as objects, as well as any additional objects that you make. `pressur` is not among them, so it says that it is not found.
The first step of becoming a proficient R user is to move past the exasperation of “it’s not working!” and **read the error message**. Errors will be less frustrating with the mindset that **most likely the problem is your typo or misuse**, and not that R is broken or hates you. Read the error message to learn what is wrong.
#### 3\.4\.1\.2 RMarkdown error messages
Errors can also occur in RMarkdown. I said a moment ago that you label your code chunks, they need to be unique. Let’s see what happens if not. If I (re)name our `summary(pressure)` chunk to “cars,” I will see an error when you try to knit:
```
processing file: testing.Rmd
Error in parse_block(g[-1], g[1], params.src) : duplicate label 'cars'
Calls: <Anonymous> ... process_file -> split_file -> lapply -> FUN -> parse_block
Execution halted
```
There are two things to focus on here.
First: This error message starts out in a pretty cryptic way: I don’t expect you to know what `parse_block(g[-1]...` means. But, expecting that the error message is really trying to help me, I continue scanning the message which allows me to identify the problem: `duplicate label 'cars'`.
Second: This error is in the “R Markdown” tab on the bottom left of the RStudio IDE; it is not in the Console. That is because when RMarkdown is knitted, it actually spins up an R workspace separately from what is passed to the Console; this is one of the ways that R Markdown enables reproducibility because it is a self\-contained instance of R.
You can click back and forth between the Console and the R Markdown tab; this is something to look out for as we continue. We will work in the Console and R Markdown and will discuss strategies for where and how to work as we go. Let’s click back to Console now.
### 3\.4\.2 Running RMarkdown code chunks
So far we have written code in our RMarkdown file that is executed when we knit the file. We have also written code directly in the Console that is executed when we press enter/return. Additionally, we can write code in an RMarkdown code chunk and execute it by sending it into the Console (i.e. we can execute code without knitting the document).
How do we do it? There are several ways. Let’s do each of these with `summary(pressure)`.
**First approach: send R code to the Console.**
This approach involves selecting (highlighting) the R code only (`summary(pressure)`), not any of the backticks/fences from the code chunk.
> **Troubleshooting:** If you see `Error: attempt to use zero-length variable name` it is because you have accidentally highlighted the backticks along with the R code. Try again — and don’t forget that you can add spaces within the code chunk or make your RStudio session bigger (View \> Zoom In)!
Do this by selecting code and then:
1. copy\-pasting into the Console and press enter/return.
2. clicking ‘Run’ from RStudio IDE. This is available from:
1. the bar above the file (green arrow)
2. the menu bar: Code \> Run Selected Line(s)
3. keyboard shortcut: command\-return
**Second approach: run full code chunk.**
Since we are already grouping relevant code together in chunks, it’s reasonable that we might want to run it all together at once.
Do this by placing your curser within a code chunk and then:
1. clicking the little black down arrow next to the Run green arrow and selecting Run Current Chunk. Notice there are also options to run all chunks, run all chunks above or below…
### 3\.4\.3 Writing code in a file vs. Console
When should you write code in a file (.Rmd or .R script) and when should you write it in the Console?
We write things in the file that are necessary for our analysis and that we want to preserve for reproducibility; we will be doing this throughout the workshop to give you a good sense of this. A file is also a great way for you to take notes to yourself.
The Console is good for doing quick calculations like `8*22.3`, testing functions, for calling help pages, for installing packages. We’ll explore these things next.
3\.5 R functions
----------------
Like Excel, the power of R comes not from doing small operations individually (like `8*22.3`). R’s power comes from being able to operate on whole suites of numbers and datasets.
And also like Excel, some of the biggest power in R is that there are built\-in functions that you can use in your analyses (and, as we’ll see, R users can easily create and share functions, and it is this open source developer and contributor community that makes R so awesome).
R has a mind\-blowing collection of built\-in functions that are used with the same syntax: function name with parentheses around what the function needs to do what it is supposed to do.
We’ve seen a few functions already: we’ve seen `plot()` and `summary()`.
Functions always have the same structure: a name, parentheses, and arguments that you can specify. `function_name(arguments)`. When we talk about function names, we use the convention `function_name()` (the name with empty parentheses), but in practice, we usually supply arguments to the function `function_name(arguments)` so that it works on some data. Let’s see a few more function examples.
Like in Excel, there is a function called “sum” to calculate a total. In R, it is spelled lowercase: `sum()`. (As I type in the Console, R will provide suggestions).
Let’s use the `sum()` function to calculate the sum of all the distances traveled in the `cars` dataset. We specify a single column of a dataset using the `$` operator:
```
sum(cars$dist)
```
Another function is simply called `c()`; which combines values together.
So let’s create a new R code chunk. And we’ll write:
```
c(1, 7:9)
```
```
## [1] 1 7 8 9
```
> Aside: some functions don’t require arguments: try typing `date()` into the Console. Be sure to type the parentheses (`date()`); otherwise R will return the code behind the `date()` function rather than the output that you want/expect.
So you can see that this combines these values all into the same place, which is called a vector here. We could also do this with a non\-numeric examples, which are called “strings”:
```
c("San Francisco", "Cal Academy")
```
```
## [1] "San Francisco" "Cal Academy"
```
We need to put quotes around non\-numeric values so that R does not interpret them as an object. It would definitely get grumpy and give us an error that it did not have an object by these names. And you see that R also prints in quotes.
We can also put functions inside of other functions. This is called nested functions. When we add another function inside a function, R will evaluate them from the inside\-out.
```
c(sum(cars$dist), "San Francisco", "Cal Academy")
```
```
## [1] "2149" "San Francisco" "Cal Academy"
```
So R first evaluated the `sum(cars$dist)`, and then evaluates the `c()` statement.
This example demonstrates another key idea in R: the idea of **classes**. The output R provides is called a vector, and everything within that vector has to be the same type of thing: we can’t have both numbers and words inside. So here R is able to first calculate `sum(cars$dist)` as a number, but then `c()` will turn that number into a text, called a “string” in R: you see that it is in quotes. It is no longer a numeric, it is a string.
This is a big difference between R and Excel, since Excel allows you to have a mix of text and numeric in the same column or row. R’s way can feel restrictive, but it is also more predictable. In Excel, you might have a single number in your whole sheet that Excel is silently interpreting as text so it is causing errors in the analyses. In R, the whole column will be the same type. This can still cause trouble, but that is where the good practices that we are learning together can help minimize that kind of trouble.
We will not discuss classes or work with nested functions very much in this workshop (the tidyverse design and pipe operator make nested functions less prevalent). But we wanted to introduce them to you because they will be something you encounter as you continue on your journey with R.
3\.6 Help pages
---------------
Every function available to you should have a help page, and you access it by typing a question mark preceding the function name in the Console.
Let’s have a deeper look at the arguments for `plot()`, using the help pages.
```
?plot
```
This opens up the correct page in the Help Tab in the bottom\-right of the RStudio IDE. You can also click on the tab and type in the function name in the search bar.
All help pages have the same format; here is how I look at them:
The help page gives the name of the package in the top left, and is broken down into sections:
> Help pages
>
> \- Description: An extended description of what the function does.
> \- Usage: The arguments of the function and their default values.
> \- Arguments: An explanation of the data each argument is expecting.
> \- Details: Any important details to be aware of.
> \- Value: The data the function returns.
> \- See Also: Any related functions you might find useful.
> \- Examples: Some examples for how to use the function.
When I look at a help page, I start with the Description to see if I am in the right place for what I need to do. Reading the description for `plot` lets me know that yup, this is the function I want.
I next look at the usage and arguments, which give me a more concrete view into what the function does. `plot` requires arguments for `x` and `y`. But we passed only one argument to `plot()`: we passed the cars dataset (`plot(cars)`). R is able to understand that it should use the two columns in that dataset as x and y, and it does so based on order: the first column “speed” becomes x and the second column “dist” becomes y. The `...` means that there are many other arguments we can pass to `plot()`, which we should expect: I think we can all agree that it would be nice to have the option of making this figure a little more beautiful and compelling. Glancing at some of these arguments, we can see that many of them control the style of the plot.
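For instance, here is a quick sketch of passing a few of those optional arguments to style the plot; `xlab`, `ylab`, and `col` are standard base R plotting arguments, and the label text is just our own choice:
```
plot(cars,
     xlab = "Speed (mph)",            # x-axis label
     ylab = "Stopping distance (ft)", # y-axis label
     col = "blue")                    # color of the points
```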
Next, I usually scroll down to the bottom to the examples. This is where I can actually see how the function is used, and I can also paste those examples into the Console to see their output. Let’s try it:
```
plot(sin, -pi, 2*pi)
```
3\.7 Commenting
---------------
I’ve been working in the Console to illustrate working interactively with the live R process. But you will likely want to write some of these things down as notes in your R Markdown file. That’s great!
But you may not want everything you type to be run when you knit your document. So you can tell R not to run something by “commenting it out.” This is done with one or more pound/hash/number signs: `#`. So if I wanted to write a note to myself about using `?` to open the help pages, I would write this in my R Markdown code chunk:
```
## open help pages with ?:
# ?plot
```
RStudio color\-codes comments as green so they are easier to see.
Notice that my convention is to use two `##`’s for my notes, and only one for the code that I don’t want to run now, but might want to run other times. I like this convention because in RStudio you can uncomment/recomment multiple lines of code at once if you use just one `#`: do this by going to the menu Code \> Comment/Uncomment Lines (keyboard shortcut on my Mac: Shift\-Command\-C).
> **Aside**: Note also that the hashtag `#` is used differently in Markdown and in R. In R, a hashtag indicates a comment that will not be evaluated. You can use as many as you want: `#` is equivalent to `######`. In Markdown, a hashtag indicates a level of a header. And the number you use matters: `#` is a “level one header,” meaning the biggest font and the top of the hierarchy. `###` is a level three header, and will show up nested below the `#` and `##` headers.
3\.8 Assigning objects with `<-`
--------------------------------
In Excel, data are stored in the spreadsheet. In R, they are stored in objects. Data can come in a variety of formats, for example the numeric values and strings we just talked about.
We will be working with data objects that are rectangular in shape. If they only have one column or one row, they are also called a vector. And we assign these objects names.
This is a big difference with Excel, where you usually identify data by its location on the grid, like `$A1:D$20`. (You can do this with Excel by naming ranges of cells, but many people don’t do this.)
We assign an object a name by writing the name along with the assignment operator `<-`. Let’s try it by creating a variable called “x” and assigning it the value 10\.
```
x <- 10
```
When I see this written, in my head I hear “x gets 10\.”
When we send this to the Console (I do this with Command \- Enter), notice how nothing is printed in return. This is because when we assign a variable, by default it is not returned. We can see what x is by typing it in the Console and hitting enter.
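For example, typing `x` and hitting enter returns its value:
```
x
```
```
## [1] 10
```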
We can also create new objects from existing objects. Let’s say we want to have the distance traveled by cars in its own variable, and multiply it by 1000 (assuming these data are in km and we want m).
```
dist_m <- cars$dist * 1000
```
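To double\-check the new object, we can peek at its first few values with base R’s `head()` function:
```
head(dist_m)  # first six values of the new vector
```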
Object names can be whatever you want, although it is wise not to name objects after functions or values that you know already exist, for example “c” or “FALSE.” Additionally, names cannot start with a digit and cannot contain spaces. Different folks have different conventions; you will be wise to adopt a [convention for demarcating words](http://en.wikipedia.org/wiki/Snake_case) in names.
```
## i_use_snake_case
## other.people.use.periods
## evenOthersUseCamelCase
## also-there-is-kebab-case
```
3\.9 R Packages
---------------
So far we’ve been using a couple functions that are included with R out\-of\-the\-box such as `plot()` and `c()`. We say that these functions are from “Base R.” But, one of the amazing things about R is that a vast user community is always creating new functions and packages that expand R’s capabilities.
In R, the fundamental unit of shareable code is the package. A package bundles together code, data, documentation (including the help pages), and tests, and is easy to share with others. Packages increase the power of R by improving existing base R functionality or by adding new capabilities.
The traditional place to download packages is from CRAN, the [Comprehensive R Archive Network](https://cran.r-project.org/), which is where you downloaded R. CRAN is like a grocery store or iTunes for vetted R packages.
> **Aside**: You can also install packages from GitHub; see [`devtools::install_github()`](https://devtools.r-lib.org/)
You don’t need to go to CRAN’s website to install packages; this can be accomplished within R using the command `install.packages("package-name-in-quotes")`.
### 3\.9\.1 How do you know what packages/functions exist?
How do you know what packages exist? Well, how do you know what movies exist on iTunes? You learn what’s available based on your needs, your interests, and the community around you. We’ll introduce you to several really powerful packages that we work with and help you find others that might be of interest to you. *provide examples here*
### 3\.9\.2 Installing R Packages
Let’s install several packages that we will be using shortly. Write this in your R Markdown document and run it:
```
## setup packages
install.packages("usethis")
```
And after you run it, comment it out:
```
## setup packages
# install.packages("usethis")
```
Now we’ve installed the package, but we need to tell R that we are going to use the functions within the `usethis` package. We do this by using the function `library()`.
In my mind, this is analogous to wiring your house for electricity: the wiring is something you do once, and that is `install.packages()`. But then you need to turn on the lights each time you want to use them, and that is `library()`, which you run once per R session.
It’s a nice convention to do this on the same line as your commented\-out `install.packages()` line; this makes it easier for someone (including you in a future time or computer) to install the package easily.
```
## setup packages
library(usethis) # install.packages("usethis")
```
When `usethis` is successfully attached, you won’t get any feedback in the Console. So unless you get an error, this worked for you.
Now let’s do the same with the `here` package.
```
library(here) # install.packages("here")
```
```
## here() starts at /Users/lowndes/github/rstudio-conf-2020/r-for-excel
```
```
# here() starts at /Users/lowndes
```
`here` also successfully attached, but it isn’t quiet about it. It is a “chatty” package: when we attached it, it responded with the filepath we are working from. This is the same as `~/`, which we saw earlier.
Finally, let’s install the `tidyverse` package.
```
# install.packages("tidyverse")
```
“The tidyverse is a coherent system of packages for data manipulation, exploration and visualization that share a common design philosophy.” \- Joseph Rickert: [What is the tidyverse?](https://rviews.rstudio.com/2017/06/08/what-is-the-tidyverse/), RStudio Community Blog.
This may take a little while to complete.
3\.10 GitHub brief intro \& config
----------------------------------
Before we break, we are going to set up Git and GitHub which we will be using along with R and RStudio for the rest of the workshop.
Before we do the setup configuration, let me take a moment to talk about what Git and GitHub are.
It helps me to think of GitHub like Dropbox: you identify folders for GitHub to ‘track’ and it syncs them to the cloud. This is good first\-and\-foremost because it makes a back\-up copy of your files: if your computer dies not all of your work is gone. But with GitHub, you have to be more deliberate about when syncs are made. This is because GitHub saves these as different versions, with information about who contributed when, line\-by\-line. This makes collaboration easier, and it allows you to roll\-back to different versions or contribute to others’ work.
Git will track and version your files; GitHub stores this online and enables you to collaborate with others (and yourself). Although Git and GitHub are two distinct things, we can think of them as a bundle since we will always use them together.
### 3\.10\.1 Configure GitHub
This setup is a one\-time thing! You will only have to do this once per computer. We’ll walk through this together. In a browser, go to github.com and to your profile page as a reminder.
**You will need to remember your GitHub username, the email address you created your GitHub account with, and your GitHub password.**
We will be using the `use_git_config()` function from the `usethis` package we just installed. Since we already installed and attached this package, type this into your Console:
```
## use_git_config function with my username and email as arguments
use_git_config(user.name = "jules32", user.email = "jules32@example.org")
```
If you see `Error in use_git_config() : could not find function "use_git_config"`, please run `library("usethis")`.
### 3\.10\.2 Ensure that Git/GitHub/RStudio are communicating
We are going to go through a few steps to ensure that Git/GitHub are communicating with RStudio.
#### 3\.10\.2\.1 RStudio: New Project
Click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
#### 3\.10\.2\.2 Select Version Control
#### 3\.10\.2\.3 Select Git
Since we are using git.
Do you see what I see?
If yes, hooray! Time for a break!
If no, we will help you troubleshoot.
1. Double check that GitHub username and email are correct
2. Troubleshooting, starting with [HappyGitWithR’s troubleshooting chapter](http://happygitwithr.com/troubleshooting.html)
* `which git` (Mac, Linux, or anything running a bash shell)
* `where git` (Windows, when not in a bash shell)
3. Potentially set up an RStudio Cloud account: <https://rstudio.cloud/>
### 3\.10\.3 Troubleshooting
#### 3\.10\.3\.1 Configure git from Terminal
If `usethis` fails, the following is the classic approach to configuring **git**. Open the Git Bash program (Windows) or the Terminal (Mac) and type the following:
```
# display your version of git
git --version
# replace USER with your Github user account
git config --global user.name USER
# replace NAME@EMAIL.EDU with the email you used to register with Github
git config --global user.email NAME@EMAIL.EDU
# list your config to confirm user.* variables set
git config --list
```
This will configure git with global (`--global`) commands, which means it will apply ‘globally’ to all your future GitHub repositories, rather than only to this one now. **Note for PCs**: We’ve seen PC failures correct themselves by doing the above but omitting `--global`. (Then you will need to configure Git for every repo you clone, but that is fine for now.)
#### 3\.10\.3\.2 Troubleshooting
All troubleshooting starts with reading Happy Git With R’s [RStudio, Git, GitHub Hell](http://happygitwithr.com/troubleshooting.html) troubleshooting chapter.
##### 3\.10\.3\.2\.1 New(ish) Error on a Mac
We’ve also seen the following errors from RStudio:
```
error key does not contain a section --global terminal
```
and
```
fatal: not in a git directory
```
To solve this, go to the Terminal and type:
`which git`
Look at the filepath that is returned. Does it say anything to do with Apple?
\-\> If yes, then the [Git you downloaded](https://git-scm.com/downloads) isn’t installed; please redownload it if necessary, and follow the instructions to install it.
\-\> If no (in the example image, the filepath does not mention Apple), then proceed below:
In RStudio, navigate to: Tools \> Global Options \> Git/SVN.
Does the **“Git executable”** filepath match what the Terminal returned?
If not, click the browse button and navigate there.
> *Note*: on my laptop, even though I navigated to /usr/local/bin/git, it then automatically redirected because /usr/local/bin/git was an alias on my computer. That is fine. Click OK.
### 3\.10\.4 END **RStudio/RMarkdown** session!
Chapter 4 GitHub
================
4\.1 Summary
------------
We will learn about version control and practice a workflow with GitHub and RStudio that streamlines working with our most important collaborator: Future You.
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (i.e. RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [3](rstudio.html#rstudio)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
4\.2 Why should R users use Github?
-----------------------------------
Modern R users use GitHub because it helps make coding collaborative and social while also providing huge benefits to organization, archiving, and being able to find your files easily when you need them.
One of the most compelling reasons for me is that it ends (or nearly ends) the horror of keeping track of versions.
Basically, we get away from this:
This is a nightmare not only because I have NO idea which is truly the version we used in that analysis we need to update, but because it is going to take a lot of detective work to see what actually changed between each file. Also, it is very sad to think about the amount of time everyone involved is spending on bookkeeping: is everyone downloading an attachment, dragging it to wherever they organize this on their own computers, and then renaming everything? Hours and hours of all of our lives.
But then there is GitHub.
In GitHub, in this example you will likely only see a single file, which is the most recent version. GitHub’s job is to track who made any changes and when (so no need to save a copy with your name or date at the end), and it also requires that you write something human\-readable that will be a breadcrumb for you in the future. It is also designed to be easy to compare versions, and you can easily revert to previous versions.
GitHub also supercharges you as a collaborator. First and foremost with Future You, but also sets you up to collaborate with Future Us!
GitHub, especially in combination with RStudio, is also game\-changing for publishing and distributing. You can — and we will — publish and share files openly on the internet.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **GitHub** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but it also allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features for visualizing differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) map data files, [rendering text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [tracking changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
GitHub was developed for software development, so much of the functionality and terminology that excites professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
4\.3 Github Configuration
-------------------------
We’ve just configured GitHub at the end of Chapter [3](rstudio.html#rstudio), so skip to the next section if you’ve just completed this! However, if you’re dropping into this chapter to set up GitHub, make sure you first [configure GitHub with these instructions](rstudio.html#github-brief-intro-config) before continuing.
4\.4 Create a repository on Github.com
--------------------------------------
Let’s get started by going to <https://github.com> and going to our user profile. You can do this by typing your username in the URL (github.com/username), or after signing in, by clicking on the top\-right button and going to your profile.
This will have an overview of you and your work, and then you can click on the Repositories tab.
Repositories are the main “unit” of GitHub: they are what GitHub tracks. They are essentially project\-level folders that will contain everything associated with a project. It’s where we’ll start too.
We create a new repository (called a “repo”) by clicking “New repository.”
Choose a name. Call it whatever you want (the shorter the better), or follow me for convenience. I will call mine `r-workshop`.
Also, add a description, make it public, create a README file, and create your repo!
The *Add gitignore* option adds a document where you can identify files or file\-types you want GitHub to ignore. These files will stay in the local GitHub folder (the one on your computer), but will not be uploaded onto the web version of GitHub.
The *Add a license* option adds a license that describes how other people can use your Github files (e.g., open source, but no one can profit from them, etc.). We won’t worry about this today.
Check out our new repository!
Great! So now we have our new repository that exists in the Cloud. Let’s get it established locally on our computers: that is called “cloning.”
4\.5 Clone your repository using RStudio
----------------------------------------
Let’s clone this repo to our local computer using RStudio. Unlike downloading, cloning keeps all the version control and user information bundled with the files.
### 4\.5\.1 Copy the repo address
First, copy the web address of the repository you want to clone. We will use HTTPS.
> **Aside**: HTTPS is default, but you could alternatively set up with SSH. This is more advanced than we will get into here, but allows 2\-factor authentication. See [Happy Git with R](https://happygitwithr.com/credential-caching.html#special-consideration-re-two-factor-authentication) for more information.
### 4\.5\.2 RStudio: New Project
Now go back to RStudio, and click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
### 4\.5\.3 Select Version Control
### 4\.5\.4 Select Git
Since we are using git.
### 4\.5\.5 Paste the repo address
Paste the repo address (which is still in your clipboard) into the “Repository URL” field. The “Project directory name” should autofill; if it does not, press *tab* or type it in. It is best practice to keep the “Project directory name” THE SAME as the repository name.
When cloned, this repository is going to become a folder on your computer.
At this point you can save this repo anywhere. There are different schools of thought but we think it is useful to create a high\-level folder where you will keep your github repos to keep them organized. We call ours `github` and keep it in our root folder (`~/github`), and so that is what we will demonstrate here — you are welcome to do the same. Press “Browse…” to navigate to a folder and you have the option of creating a new folder.
Finally, click Create Project.
### 4\.5\.6 Admire your local repo
If everything went well, the repository will show up in RStudio!
The repository is also saved to the location you specified, and you can navigate to it as you normally would in Finder or Windows Explorer:
Hooray!
### 4\.5\.7 Inspect your local repo
Let’s notice a few things:
First, our working directory is set to `~/github/r-workshop`, and `r-workshop` is also named in the top right hand corner.
Second, we have a Git tab in the top right pane! Let’s click on it.
Our Git tab has 2 items:
* .gitignore file
* .Rproj file
These have been added to our repo by RStudio — we can also see them in the File pane in the bottom right of RStudio. These are helper files that RStudio has added to streamline our workflow with GitHub and R. We will talk about these a bit more soon. One thing to note about these files is that they begin with a period (`.`) which means they are hidden files: they show up in the Files pane of RStudio but won’t show up in your Finder or Windows Explorer.
Going back to the Git tab, both these files have little yellow icons with question marks `?`. This is GitHub’s way of saying: “I am responsible for tracking everything that happens in this repo, but I’m not sure what is going on with these files yet. Do you want me to track them too?”
We will handle this in a moment; first let’s look at the README.md file.
### 4\.5\.8 Edit your README file
Let’s also open up the README.md. This is a Markdown file, which is the same language we just learned with R Markdown. It’s like an R Markdown file without the abilities to run R code.
We will edit the file and illustrate how GitHub tracks files that have been modified (to complement seeing how it tracks files that have been added).
README files are common in programming; they are the first place that someone will look to see why code exists and how to run it.
In my README, I’ll write:
```
This repo is for my analyses at RStudio::conf(2020).
```
When I save this, notice how it shows up in my Git tab. It has a blue “M”: GitHub is already tracking this file, and tracking it line\-by\-line, so it knows that something is different: it’s Modified with an M.
Great. Now let’s sync back to GitHub in 4 steps.
4\.6 Sync from RStudio (local) to GitHub (remote)
-------------------------------------------------
Syncing to GitHub.com means 4 steps:
1. Pull
2. Stage
3. Commit
4. Push
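For reference, here is a hedged sketch of what these same four steps look like as Git commands in the Terminal; in this workshop we will do all of this by clicking in RStudio’s Git tab instead, and the filename and commit message below are just placeholders:
```
git pull                       # 1. Pull: get the latest version from GitHub.com
git add README.md              # 2. Stage: tell git which changed file(s) to include
git commit -m "Update README"  # 3. Commit: snapshot your work with a human-readable message
git push                       # 4. Push: send your commit(s) up to GitHub.com
```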
We start off this whole process by clicking on the Commit section.
### 4\.6\.1 Pull
We start off by “Pulling” from the remote repository (GitHub.com) to make sure that our local copy has the most up\-to\-date information that is available online. Right now, since we just created the repo and are the only ones that have permission to work on it, we can be pretty confident that there isn’t new information available. But we pull anyways because this is a very safe habit to get into for when you start collaborating with yourself across computers or others. Best practice is to pull often: it costs nothing (other than an internet connection).
Pull by clicking the teal Down Arrow. (Notice also how when you highlight a filename, a preview of the differences displays below).
### 4\.6\.2 Stage
Let’s click the boxes next to each file. This is called “staging a file”: you are indicating that you want GitHub to track this file, and that you will be syncing it shortly. Notice:
* .Rproj and .gitignore files: the question marks turn into an A because these are new files that have been added to your repo (automatically by RStudio, not by you).
* README.md file: the M indicates that this was modified (by you)
These are the codes used to describe how the files have changed (from the RStudio [cheatsheet](http://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf)):
### 4\.6\.3 Commit
Committing is different from saving our files (which we still have to do! RStudio will indicate a file is unsaved with red text and an asterisk). We commit a single file or a group of files when we are ready to save a snapshot in time of the progress we’ve made. Maybe this is after a big part of the analysis was done, or when you’re done working for the day.
Committing our files is a 2\-step process.
First, you write a “commit message,” which is a human\-readable note about what has changed that will accompany GitHub’s non\-human\-readable alphanumeric code to track our files. I think of commit messages like breadcrumbs to my Future Self: how can I use this space to be useful for me if I’m trying to retrace my steps (and perhaps in a panic?).
Second, you press Commit.
When we have committed successfully, we get a rather unsuccessful\-looking pop\-up message. You can read this message as “Congratulations! You’ve successfully committed 3 files, 2 of which are new!” It is also providing you with that alphanumeric SHA code that GitHub is using to track these files.
If our attempt was not successful, we will see an Error. Otherwise, interpret this message as a joyous one.
> Does your pop\-up message say “Aborting commit due to empty commit message”? GitHub is really serious about writing human\-readable commit messages.
When we close this window there is going to be (in my opinion) a very subtle indication that we are not done with the syncing process.
We have successfully committed our work as a breadcrumb\-message\-approved snapshot in time, but it still only exists locally on our computer. We can commit without an internet connection; we have not done anything yet to tell GitHub that we want this pushed to the remote repo at GitHub.com. So as the last step, we push.
### 4\.6\.4 Push
The last step in the syncing process is to Push!
Awesome! We’re done here in RStudio for the moment, let’s check out the remote on GitHub.com.
4\.7 Commit history
-------------------
The files you added should be on github.com.
Notice how the README.md file we created is automatically displayed at the bottom. Since it is good practice to have a README file that identifies what code does (i.e. why it exists), GitHub will display a Markdown file called README nicely formatted.
Let’s also explore the commit history. The 2 commits we’ve made (the first was when we originally initiated the repo from GitHub.com) are there!
4\.8 Project\-oriented workflows
--------------------------------
Let’s go back to RStudio and how we set up well\-organized projects and workflows for our data analyses.
This GitHub repository is now also an RStudio Project (capital P Project). This just means that RStudio has saved this additional file with extension `.Rproj` (ours is `r-workshop.Rproj`) to store specific settings for this project. It’s a bit of technology to help us get into the good habit of having a project\-oriented workflow.
A [project\-oriented workflow](https://rstats.wtf/project-oriented-workflow.html) means that we are going to organize all of the relevant things we need for our analyses in the same place. That means that this is the place where we keep all of our data, code, figures, notes, etc.
R Projects are great for reproducibility, because our self\-contained working directory will be the **first** place R looks for files.
### 4\.8\.1 Working directory
Now that we have our Project, let’s revisit this important question: where are we? Now we are in our Project. Everything we do will by default be saved here so we can be nice and organized.
And this is important because if Allison clones this repository that you just made and saves it in `Allison/my/projects/way/over/here`, she will still be able to interact with your files as you are here.
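As a small sketch of why this matters: with the `here` package we attached in the previous chapter, we can build file paths relative to the project root, so the same code works for you and for Allison no matter where the repo lives (this example uses the `data` folder and `fish.csv` file we will add later in this chapter):
```
library(here)
# resolves to "<wherever-the-repo-lives>/r-workshop/data/fish.csv"
here("data", "fish.csv")
```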
4\.9 Project\-oriented workflows in action (aka our analytical setup)
---------------------------------------------------------------------
Let’s get a bit organized. First, let’s create a new R Markdown file where we will do our analyses. This will be nice because you can also write notes to yourself in this document.
### 4\.9\.1 Create a new Rmd file
So let’s do this (again):
File \> New File \> R Markdown … (or click the green plus in the top left corner).
Let’s set up this file so we can use it for the rest of the day. I’m going to update the header with a new title and add my name, and then I’m going to delete the rest of the document so that we have a clean start.
> **Efficiency Tip**: I use Shift \- Command \- Down Arrow to highlight text from my cursor to the end of the document
```
---
title: "Creating graphs in R with `ggplot2`"
author: "Julie Lowndes"
date: "01/27/2020"
output: html_document
---
# Plots with ggplot2
We are going to make plots in R and it's going to be amazing.
```
Now, let’s save it. I’m going to call my file `plots-ggplot.Rmd`.
Notice that when we save this file, it pops up in our Git tab. Git knows that there is something new in our repo.
Let’s also knit this file. And look: Git also sees the knitted .html.
And let’s practice syncing our file to GitHub: pull, stage, commit, push
> **Troubleshooting:** What if a file doesn’t show up in the Git tab and you expect that it should? Check to make sure you’ve saved the file. If the filename is red with an asterisk, there have been changes since it was saved. Remember to save before syncing to GitHub!
### 4\.9\.2 Create data and figures folders
Let’s create a few folders to be organized. Let’s have one for our raw data, and one for the figures we will output. We can do this in RStudio, in the Files pane in the bottom right, by clicking the New Folder button:
* folder called “data”
* folder called “figures”
We can press the refresh button in the top\-right of this pane (next to the “More” button) to have these show up in alphabetical order.
Now let’s go to our Finder or Windows Explorer: our new folders are there as well!
### 4\.9\.3 Move data files to data folder
You downloaded several files for this workshop from the [r\-for\-excel\-data folder](https://drive.google.com/drive/folders/1RywSUw8hxETlROdIhLIntxPsZq0tKSdS?usp=sharing), and we’ll move these data into our repo now. These data files are a mix of comma\-separated value (.csv) files and Excel spreadsheets (.xlsx):
* ca\_np.csv
* ci\_np.xlsx
* fish.csv
* inverts.xlsx
* kelp\_fronds.xlsx
* lobsters.xlsx
* lobsters2\.xlsx
* noaa\_landings.csv
* substrate.xlsx
Copy\-paste or drag all of these files into the ‘data’ subfolder of your R project. Make sure you do not also copy the original folder; we don’t need any subfolders in our data folder.
Now let’s go back to RStudio. We can click on the data folder in the Files tab and see them.
The data folder also shows up in your Git tab. But the figures folder does not. That is because GitHub cannot track an empty folder; it can only track files within a folder.
Let’s sync these data files (we will be able to sync the figures folder shortly). We can stage multiple files at once by typing Command \- A and clicking “Stage” (or using the space bar). To Sync: pull \- stage \- commit \- push!
### 4\.9\.4 Activity
Edit your README and practice syncing (pull, stage, commit, push). For example,
```
"We use the following data from the Santa Barbara Coastal Term Ecological Research and National Oceanic and Atmospheric Administration in our analyses"
```
Explore your Commit History, and discuss with your neighbor.
4\.10 Committing \- how often? Tracking changes in your files
-------------------------------------------------------------
Whenever you make changes to the files in Github, you will walk through the Pull \-\> Stage \-\> Commit \-\> Push steps.
I tend to do this every time I finish a task (basically when I start getting nervous that I will lose my work). Once something is committed, it is very difficult to lose it.
4\.11 Issues
------------
Let’s go back to our repo on GitHub.com, and talk about Issues.
Issues “track ideas, enhancements, tasks, or bugs for work on GitHub.” \- [GitHub help article](https://help.github.com/en/articles/about-issues).
You can create an issue for a topic, track progress, let others ask questions, provide links and updates, and close the issue when it is completed.
In a public repo, anyone with a username can create and comment on issues. In a private repo, only users with permission can create and comment on issues, or see them at all.
GitHub search is awesome – it will search both code and issues!
### 4\.11\.1 Issues in the wild!
Here are some examples of “traditional” and “less traditional” Issues:
Bug reports, code, feature, \& help requests: [ggplot2](https://github.com/tidyverse/ggplot2/issues)
Project submissions and progress tracking: [MozillaFestival](https://github.com/MozillaFestival/mozfest-program-2018/issues)
Private conversations and archiving: [OHI Fellows (private)](https://github.com/OHI-Science/globalfellows-issues/issues/)
### 4\.11\.2 END **GitHub** session!
We’ll continue practicing GitHub throughout the rest of the book, but see Chapter [9](collaborating.html#collaborating) for explicit instructions on collaborating in GitHub.
4\.1 Summary
------------
We will learn about version control and practice a workflow with GitHub and RStudio that streamlines working with our most important collaborator: Future You.
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (i.e. RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [4](github.html#github)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (i.e. RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [4](github.html#github)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
4\.2 Why should R users use Github?
-----------------------------------
Modern R users use GitHub because it helps make coding collaborative and social while also providing huge benefits to organization, archiving, and being able to find your files easily when you need them.
One of the most compelling reasons for me is that it ends (or nearly ends) the horror of keeping track of versions.
Basically, we get away from this:
This is a nightmare not only because I have NO idea which is truly the version we used in that analysis we need to update, but because it is going to take a lot of detective work to see what actually changed between each file. Also, it is very sad to think about the amount of time everyone involved is spending on bookkeeping: is everyone downloading an attachment, dragging it to wherever they organize this on their own computers, and then renaming everything? Hours and hours of all of our lives.
But then there is GitHub.
In GitHub, in this example you will likely only see a single file, which is the most recent version. GitHub’s job is to track who made any changes and when (so no need to save a copy with your name or date at the end), and it also requires that you write something human\-readable that will be a breadcrumb for you in the future. It is also designed to be easy to compare versions, and you can easily revert to previous versions.
GitHub also supercharges you as a collaborator. First and foremost with Future You, but also sets you up to collaborate with Future Us!
GitHub, especially in combination with RStudio, is also game\-changing for publishing and distributing. You can — and we will — publish and share files openly on the internet.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **Github** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but also as allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features to be able visualize differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) map data files, [render text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [track changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
Github was developed for software development, so much of the functionality and terminology of that is exciting for professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **Github** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but also as allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features to be able visualize differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) map data files, [render text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [track changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
Github was developed for software development, so much of the functionality and terminology of that is exciting for professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
4\.3 Github Configuration
-------------------------
We’ve just configured Github this at the end of Chapter [3](rstudio.html#rstudio). So skip to the next section if you’ve just completed this! However, if you’re dropping in on this chapter to setup Github, make sure you first [configure Github with these instructions](rstudio.html#github-brief-intro-config) before continuing.
4\.4 Create a repository on Github.com
--------------------------------------
Let’s get started by going to <https://github.com> and going to our user profile. You can do this by typing your username in the URL (github.com/username), or after signing in, by clicking on the top\-right button and going to your profile.
This will have an overview of you and your work, and then you can click on the Repository tab
Repositories are the main “unit” of GitHub: they are what GitHub tracks. They are essentially project\-level folders that will contain everything associated with a project. It’s where we’ll start too.
We create a new repository (called a “repo”) by clicking “New repository.”
Choose a name. Call it whatever you want (the shorter the better), or follow me for convenience. I will call mine `r-workshop`.
Also, add a description, make it public, create a README file, and create your repo!
The *Add gitignore* option adds a document where you can identify files or file\-types you want Github to ignore. These files will stay in on the local Github folder (the one on your computer), but will not be uploaded onto the web version of Github.
The *Add a license* option adds a license that describes how other people can use your Github files (e.g., open source, but no one can profit from them, etc.). We won’t worry about this today.
Check out our new repository!
Great! So now we have our new repository that exists in the Cloud. Let’s get it established locally on our computers: that is called “cloning.”
4\.5 Clone your repository using RStudio
----------------------------------------
Let’s clone this repo to our local computer using RStudio. Unlike downloading, cloning keeps all the version control and user information bundled with the files.
### 4\.5\.1 Copy the repo address
First, copy the web address of the repository you want to clone. We will use HTTPS.
> **Aside**: HTTPS is default, but you could alternatively set up with SSH. This is more advanced than we will get into here, but allows 2\-factor authentication. See [Happy Git with R](https://happygitwithr.com/credential-caching.html#special-consideration-re-two-factor-authentication) for more information.
### 4\.5\.2 RStudio: New Project
Now go back to RStudio, and click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
also in the File menu).
### 4\.5\.3 Select Version Control
### 4\.5\.4 Select Git
Since we are using git.
### 4\.5\.5 Paste the repo address
Paste the repo address (which is still in your clipboard) into in the “Repository URL” field. The “Project directory name” should autofill; if it does not press *tab*, or type it in. It is best practice to keep the “Project directory name” THE SAME as the repository name.
When cloned, this repository is going to become a folder on your computer.
At this point you can save this repo anywhere. There are different schools of thought but we think it is useful to create a high\-level folder where you will keep your github repos to keep them organized. We call ours `github` and keep it in our root folder (`~/github`), and so that is what we will demonstrate here — you are welcome to do the same. Press “Browse…” to navigate to a folder and you have the option of creating a new folder.
Finally, click Create Project.
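For reference only: the same clone can also be done from R with the `gert` package. This is just a sketch of the equivalent command (the username `jane_doe` is a placeholder), not something we need to run in this workshop:

```
# install.packages("gert")   # once, if needed
library(gert)

# clone the remote repo into ~/github/r-workshop
# (replace jane_doe with your own GitHub username)
git_clone(
  url  = "https://github.com/jane_doe/r-workshop.git",
  path = "~/github/r-workshop"
)
```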
### 4\.5\.6 Admire your local repo
If everything went well, the repository will show up in RStudio!
The repository is also saved to the location you specified, and you can navigate to it as you normally would in Finder or Windows Explorer:
Hooray!
### 4\.5\.7 Inspect your local repo
Let’s notice a few things:
First, our working directory is set to `~/github/r-workshop`, and `r-workshop` is also named in the top right hand corner.
Second, we have a Git tab in the top right pane! Let’s click on it.
Our Git tab has 2 items:
* .gitignore file
* .Rproj file
These have been added to our repo by RStudio — we can also see them in the File pane in the bottom right of RStudio. These are helper files that RStudio has added to streamline our workflow with GitHub and R. We will talk about these a bit more soon. One thing to note about these files is that they begin with a period (`.`) which means they are hidden files: they show up in the Files pane of RStudio but won’t show up in your Finder or Windows Explorer.
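For the curious: the `.gitignore` is just a plain text file listing files and patterns that git should not track or upload. The one RStudio writes typically looks something like this (yours may differ slightly):

```
.Rproj.user
.Rhistory
.RData
.Ruserdata
```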
Going back to the Git tab, both these files have little yellow icons with question marks `?`. This is GitHub’s way of saying: “I am responsible for tracking everything that happens in this repo, but I’m not sure what is going on with these files yet. Do you want me to track them too?”
We will handle this in a moment; first let’s look at the README.md file.
### 4\.5\.8 Edit your README file
Let’s also open up the README.md. This is a Markdown file, which is the same language we just learned with R Markdown. It’s like an R Markdown file without the ability to run R code.
We will edit the file and illustrate how GitHub tracks files that have been modified (to complement seeing how it tracks files that have been added).
README files are common in programming; they are the first place that someone will look to see why code exists and how to run it.
In my README, I’ll write:
```
This repo is for my analyses at RStudio::conf(2020).
```
When I save this, notice how it shows up in my Git tab. It has a blue “M”: GitHub is already tracking this file, and tracking it line\-by\-line, so it knows that something is different: it’s Modified with an M.
Great. Now let’s sync back to GitHub in 4 steps.
4\.6 Sync from RStudio (local) to GitHub (remote)
-------------------------------------------------
Syncing to GitHub.com means 4 steps:
1. Pull
2. Stage
3. Commit
4. Push
We start off this whole process by clicking on the Commit section.
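For reference, here is roughly what those four steps correspond to if you were to run them from R with the `gert` package. This is only a sketch (the file names and commit message come from this example); in the workshop we will click the buttons in RStudio’s Git pane instead:

```
library(gert)

git_pull()                              # 1. Pull: get the latest from GitHub.com
git_add(c(".gitignore",                 # 2. Stage: mark which files to include
          "r-workshop.Rproj",
          "README.md"))
git_commit("Describe repo in README")   # 3. Commit: snapshot with a human-readable message
git_push()                              # 4. Push: send the commit up to GitHub.com
```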
### 4\.6\.1 Pull
We start off by “Pulling” from the remote repository (GitHub.com) to make sure that our local copy has the most up\-to\-date information that is available online. Right now, since we just created the repo and are the only ones that have permission to work on it, we can be pretty confident that there isn’t new information available. But we pull anyways because this is a very safe habit to get into for when you start collaborating with yourself across computers or others. Best practice is to pull often: it costs nothing (other than an internet connection).
Pull by clicking the teal Down Arrow. (Notice also how when you highlight a filename, a preview of the differences displays below).
### 4\.6\.2 Stage
Let’s click the boxes next to each file. This is called “staging a file”: you are indicating that you want GitHub to track this file, and that you will be syncing it shortly. Notice:
* .Rproj and .gitignore files: the question marks turn into an A because these are new files that have been added to your repo (automatically by RStudio, not by you).
* README.md file: the M indicates that this was modified (by you)
These are the codes used to describe how the files have changed (from the RStudio [cheatsheet](http://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf)):
### 4\.6\.3 Commit
Committing is different from saving our files (which we still have to do! RStudio will indicate a file is unsaved with red text and an asterisk). We commit a single file or a group of files when we are ready to save a snapshot in time of the progress we’ve made. Maybe this is after a big part of the analysis was done, or when you’re done working for the day.
Committing our files is a 2\-step process.
First, you write a “commit message,” which is a human\-readable note about what has changed that will accompany GitHub’s non\-human\-readable alphanumeric code to track our files. I think of commit messages like breadcrumbs to my Future Self: how can I use this space to be useful for me if I’m trying to retrace my steps (and perhaps in a panic?).
Second, you press Commit.
When we have committed successfully, we get a rather unsuccessful\-looking pop\-up message. You can read this message as “Congratulations! You’ve successfully committed 3 files, 2 of which are new!” It is also providing you with that alphanumeric SHA code that GitHub is using to track these files.
If our attempt was not successful, we will see an Error. Otherwise, interpret this message as a joyous one.
> Does your pop\-up message say “Aborting commit due to empty commit message”? GitHub is really serious about writing human\-readable commit messages.
When we close this window there is going to be (in my opinion) a very subtle indication that we are not done with the syncing process.
We have successfully committed our work as a breadcrumb\-message\-approved snapshot in time, but it still only exists locally on our computer. We can commit without an internet connection; we have not done anything yet to tell GitHub that we want this pushed to the remote repo at GitHub.com. So as the last step, we push.
### 4\.6\.4 Push
The last step in the syncing process is to Push!
Awesome! We’re done here in RStudio for the moment, let’s check out the remote on GitHub.com.
4\.7 Commit history
-------------------
The files you added should be on github.com.
Notice how the README.md file we created is automatically displayed at the bottom. Since it is good practice to have a README file that identifies what code does (i.e. why it exists), GitHub will display a Markdown file called README nicely formatted.
Let’s also explore the commit history. The 2 commits we’ve made (the first was when we originally initiated the repo from GitHub.com) are there!
4\.8 Project\-oriented workflows
--------------------------------
Let’s go back to RStudio and talk about how we set up well\-organized projects and workflows for our data analyses.
This GitHub repository is now also an RStudio Project (capital P Project). This just means that RStudio has saved this additional file with extension `.Rproj` (ours is `r-workshop.Rproj`) to store specific settings for this project. It’s a bit of technology to help us get into the good habit of having a project\-oriented workflow.
A [project\-oriented workflow](https://rstats.wtf/project-oriented-workflow.html) means that we are going to organize all of the relevant things we need for our analyses in the same place. That means that this is the place where we keep all of our data, code, figures, notes, etc.
R Projects are great for reproducibility, because our self\-contained working directory will be the **first** place R looks for files.
### 4\.8\.1 Working directory
Now that we have our Project, let’s revisit this important question: where are we? Now we are in our Project. Everything we do will by default be saved here so we can be nice and organized.
And this is important because if Allison clones this repository that you just made and saves it in `Allison/my/projects/way/over/here`, she will still be able to interact with your files just as you do here, because all paths are relative to the Project.
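Here is a quick sketch of why this works: the Project sets the working directory, so relative paths resolve the same way for everyone who clones the repo, wherever they keep it. (The `data/fish.csv` file below is one we will add later in this chapter.)

```
# where are we? the Project root (your path will differ)
getwd()

# paths are relative to the Project root, so this same line works for
# anyone who clones the repo, no matter where they store it locally
fish <- read.csv("data/fish.csv")
```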
4\.9 Project\-oriented workflows in action (aka our analytical setup)
---------------------------------------------------------------------
Let’s get a bit organized. First, let’s create a new R Markdown file where we will do our analyses. This will be nice because you can also write notes to yourself in this document.
### 4\.9\.1 Create a new Rmd file
So let’s do this (again):
File \> New File \> R Markdown … (or click the green plus in the top left corner).
Let’s set up this file so we can use it for the rest of the day. I’m going to update the header with a new title and add my name, and then I’m going to delete the rest of the document so that we have a clean start.
> **Efficiency Tip**: I use Shift \- Command \- Down Arrow to highlight text from my cursor to the end of the document
```
---
title: "Creating graphs in R with `ggplot2`"
author: "Julie Lowndes"
date: "01/27/2020"
output: html_document
---
# Plots with ggplot2
We are going to make plots in R and it's going to be amazing.
```
Now, let’s save it. I’m going to call my file `plots-ggplot.Rmd`.
Notice that when we save this file, it pops up in our Git tab. Git knows that there is something new in our repo.
Let’s also knit this file. And look: Git also sees the knitted .html.
And let’s practice syncing our file to GitHub: pull, stage, commit, push
> **Troubleshooting:** What if a file doesn’t show up in the Git tab and you expect that it should? Check to make sure you’ve saved the file. If the filename is red with an asterisk, there have been changes since it was saved. Remember to save before syncing to GitHub!
### 4\.9\.2 Create data and figures folders
Let’s create a few folders to stay organized: one for the raw data, and one for the figures we will output. We can do this in RStudio, in the Files pane in the bottom right, by clicking the New Folder button:
* folder called “data”
* folder called “figures”
We can press the refresh button in the top\-right of this pane (next to the “More” button) to have these show up in alphabetical order.
Now let’s go to our Finder or Windows Explorer: our new folders are there as well!
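If you prefer, the same folders can be created with a line of R each; this sketch does exactly what clicking the New Folder button does:

```
# create the folders from the R console instead of clicking New Folder
dir.create("data")
dir.create("figures")
```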
### 4\.9\.3 Move data files to data folder
You downloaded several files for this workshop from the [r\-for\-excel\-data folder](https://drive.google.com/drive/folders/1RywSUw8hxETlROdIhLIntxPsZq0tKSdS?usp=sharing), and we’ll move these data into our repo now. These data files are a mix of comma\-separated value (.csv) files and Excel spreadsheets (.xlsx):
* ca\_np.csv
* ci\_np.xlsx
* fish.csv
* inverts.xlsx
* kelp\_fronds.xlsx
* lobsters.xlsx
* lobsters2\.xlsx
* noaa\_landings.csv
* substrate.xlsx
Copy\-paste or drag all of these files into the ‘data’ subfolder of your R project. Make sure you do not also copy the original folder; we don’t need any subfolders in our data folder.
Now let’s go back to RStudio. We can click on the data folder in the Files tab and see them.
The data folder also shows up in your Git tab. But the figures folder does not. That is because GitHub cannot track an empty folder; it can only track files within a folder.
Let’s sync these data files (we will be able to sync the figures folder shortly). We can stage multiple files at once by typing Command \- A and clicking “Stage” (or using the space bar). To Sync: pull \- stage \- commit \- push!
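Once the files are in `data/`, reading them into R will look roughly like this (a sketch, assuming the `readr` and `readxl` packages are installed):

```
library(readr)    # for .csv files
library(readxl)   # for .xlsx files

fish     <- read_csv("data/fish.csv")
lobsters <- read_excel("data/lobsters.xlsx")
```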
### 4\.9\.4 Activity
Edit your README and practice syncing (pull, stage, commit, push). For example,
```
"We use the following data from the Santa Barbara Coastal Term Ecological Research and National Oceanic and Atmospheric Administration in our analyses"
```
Explore your Commit History, and discuss with your neighbor.
4\.10 Committing \- how often? Tracking changes in your files
-------------------------------------------------------------
Whenever you make changes to the files in Github, you will walk through the Pull \-\> Stage \-\> Commit \-\> Push steps.
I tend to do this every time I finish a task (basically when I start getting nervous that I will lose my work). Once something is committed, it is very difficult to lose it.
4\.11 Issues
------------
Let’s go back to our repo on GitHub.com, and talk about Issues.
Issues “track ideas, enhancements, tasks, or bugs for work on GitHub.” \- [GitHub help article](https://help.github.com/en/articles/about-issues).
You can create an issue for a topic, track progress, let others ask questions, provide links and updates, and close the issue when it is completed.
In a public repo, anyone with a username can create and comment on issues. In a private repo, only users with permission can create and comment on issues, or see them at all.
GitHub search is awesome – it will search both code and issues!
### 4\.11\.1 Issues in the wild!
Here are some examples of “traditional” and “less traditional” Issues:
Bug reports, code, feature, \& help requests: [ggplot2](https://github.com/tidyverse/ggplot2/issues)
Project submissions and progress tracking: [MozillaFestival](https://github.com/MozillaFestival/mozfest-program-2018/issues)
Private conversations and archiving: [OHI Fellows (private)](https://github.com/OHI-Science/globalfellows-issues/issues/)
### 4\.11\.2 END **GitHub** session!
We’ll continue practicing GitHub throughout the rest of the book, but see Chapter [9](collaborating.html#collaborating) for explicit instructions on collaborating in GitHub.
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/github.html |
Chapter 4 GitHub
================
4\.1 Summary
------------
We will learn about version control and practice a workflow with GitHub and RStudio that streamlines working with our most important collaborator: Future You.
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (e.g., RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [3](rstudio.html#rstudio)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
4\.2 Why should R users use Github?
-----------------------------------
Modern R users use GitHub because it helps make coding collaborative and social while also providing huge benefits to organization, archiving, and being able to find your files easily when you need them.
One of the most compelling reasons for me is that it ends (or nearly ends) the horror of keeping track of versions.
Basically, we get away from this:
This is a nightmare not only because I have NO idea which is truly the version we used in that analysis we need to update, but because it is going to take a lot of detective work to see what actually changed between each file. Also, it is very sad to think about the amount of time everyone involved is spending on bookkeeping: is everyone downloading an attachment, dragging it to wherever they organize this on their own computers, and then renaming everything? Hours and hours of all of our lives.
But then there is GitHub.
In GitHub, in this example you will likely only see a single file, which is the most recent version. GitHub’s job is to track who made any changes and when (so no need to save a copy with your name or date at the end), and it also requires that you write something human\-readable that will be a breadcrumb for you in the future. It is also designed to be easy to compare versions, and you can easily revert to previous versions.
GitHub also supercharges you as a collaborator. First and foremost with Future You, but also sets you up to collaborate with Future Us!
GitHub, especially in combination with RStudio, is also game\-changing for publishing and distributing. You can — and we will — publish and share files openly on the internet.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **Github** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but also allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features: it can visualize differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), handle [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) of map data files, [render text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [track changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
GitHub was developed for software development, so much of the functionality and terminology that is exciting for professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
4\.3 Github Configuration
-------------------------
We’ve just configured GitHub at the end of Chapter [3](rstudio.html#rstudio), so skip to the next section if you’ve just completed this! However, if you’re dropping in on this chapter to set up GitHub, make sure you first [configure GitHub with these instructions](rstudio.html#github-brief-intro-config) before continuing.
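As a reminder of one key piece of that configuration: git needs to know your name and email. A minimal sketch using the `usethis` package looks like this (the name and email are placeholders; substitute the ones attached to your GitHub account):

```
# install.packages("usethis")   # once, if you don't already have it
library(usethis)

# tell git who you are (placeholders -- use your own name and email)
use_git_config(user.name = "Jane Doe", user.email = "jane_doe@example.com")

# print a "situation report" of your git/GitHub setup to check that it worked
git_sitrep()
```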
4\.4 Create a repository on Github.com
--------------------------------------
Let’s get started by going to <https://github.com> and going to our user profile. You can do this by typing your username in the URL (github.com/username), or after signing in, by clicking on the top\-right button and going to your profile.
This will have an overview of you and your work, and from there you can click on the Repositories tab.
Repositories are the main “unit” of GitHub: they are what GitHub tracks. They are essentially project\-level folders that will contain everything associated with a project. It’s where we’ll start too.
We create a new repository (called a “repo”) by clicking “New repository.”
Choose a name. Call it whatever you want (the shorter the better), or follow me for convenience. I will call mine `r-workshop`.
Also, add a description, make it public, create a README file, and create your repo!
The *Add gitignore* option adds a document where you can identify files or file\-types you want GitHub to ignore. These files will stay in the local GitHub folder (the one on your computer), but will not be uploaded to the web version of GitHub.
The *Add a license* option adds a license that describes how other people can use your Github files (e.g., open source, but no one can profit from them, etc.). We won’t worry about this today.
Check out our new repository!
Great! So now we have our new repository that exists in the Cloud. Let’s get it established locally on our computers: that is called “cloning.”
4\.5 Clone your repository using RStudio
----------------------------------------
Let’s clone this repo to our local computer using RStudio. Unlike downloading, cloning keeps all the version control and user information bundled with the files.
### 4\.5\.1 Copy the repo address
First, copy the web address of the repository you want to clone. We will use HTTPS.
> **Aside**: HTTPS is default, but you could alternatively set up with SSH. This is more advanced than we will get into here, but allows 2\-factor authentication. See [Happy Git with R](https://happygitwithr.com/credential-caching.html#special-consideration-re-two-factor-authentication) for more information.
### 4\.5\.2 RStudio: New Project
Now go back to RStudio and click on New Project. There are a few different ways to get there: you can go to File \> New Project…, or click the little green \+ with the R box in the top left (New Project is also in the File menu).
### 4\.5\.3 Select Version Control
### 4\.5\.4 Select Git
Choose Git, since git is the version control software we are using.
### 4\.5\.5 Paste the repo address
Paste the repo address (which is still in your clipboard) into the “Repository URL” field. The “Project directory name” should autofill; if it does not, press *tab* or type it in. It is best practice to keep the “Project directory name” THE SAME as the repository name.
When cloned, this repository is going to become a folder on your computer.
At this point you can save this repo anywhere. There are different schools of thought but we think it is useful to create a high\-level folder where you will keep your github repos to keep them organized. We call ours `github` and keep it in our root folder (`~/github`), and so that is what we will demonstrate here — you are welcome to do the same. Press “Browse…” to navigate to a folder and you have the option of creating a new folder.
Finally, click Create Project.
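For reference only: the same clone can also be done from R with the `gert` package. This is just a sketch of the equivalent command (the username `jane_doe` is a placeholder), not something we need to run in this workshop:

```
# install.packages("gert")   # once, if needed
library(gert)

# clone the remote repo into ~/github/r-workshop
# (replace jane_doe with your own GitHub username)
git_clone(
  url  = "https://github.com/jane_doe/r-workshop.git",
  path = "~/github/r-workshop"
)
```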
### 4\.5\.6 Admire your local repo
If everything went well, the repository will show up in RStudio!
The repository is also saved to the location you specified, and you can navigate to it as you normally would in Finder or Windows Explorer:
Hooray!
### 4\.5\.7 Inspect your local repo
Let’s notice a few things:
First, our working directory is set to `~/github/r-workshop`, and `r-workshop` is also named in the top right hand corner.
Second, we have a Git tab in the top right pane! Let’s click on it.
Our Git tab has 2 items:
* .gitignore file
* .Rproj file
These have been added to our repo by RStudio — we can also see them in the File pane in the bottom right of RStudio. These are helper files that RStudio has added to streamline our workflow with GitHub and R. We will talk about these a bit more soon. One thing to note about these files is that they begin with a period (`.`) which means they are hidden files: they show up in the Files pane of RStudio but won’t show up in your Finder or Windows Explorer.
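For the curious: the `.gitignore` is just a plain text file listing files and patterns that git should not track or upload. The one RStudio writes typically looks something like this (yours may differ slightly):

```
.Rproj.user
.Rhistory
.RData
.Ruserdata
```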
Going back to the Git tab, both these files have little yellow icons with question marks `?`. This is GitHub’s way of saying: “I am responsible for tracking everything that happens in this repo, but I’m not sure what is going on with these files yet. Do you want me to track them too?”
We will handle this in a moment; first let’s look at the README.md file.
### 4\.5\.8 Edit your README file
Let’s also open up the README.md. This is a Markdown file, which is the same language we just learned with R Markdown. It’s like an R Markdown file without the ability to run R code.
We will edit the file and illustrate how GitHub tracks files that have been modified (to complement seeing how it tracks files that have been added).
README files are common in programming; they are the first place that someone will look to see why code exists and how to run it.
In my README, I’ll write:
```
This repo is for my analyses at RStudio::conf(2020).
```
When I save this, notice how it shows up in my Git tab. It has a blue “M”: GitHub is already tracking this file, and tracking it line\-by\-line, so it knows that something is different: it’s Modified with an M.
Great. Now let’s sync back to GitHub in 4 steps.
4\.6 Sync from RStudio (local) to GitHub (remote)
-------------------------------------------------
Syncing to GitHub.com means 4 steps:
1. Pull
2. Stage
3. Commit
4. Push
We start off this whole process by clicking on the Commit section.
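For reference, here is roughly what those four steps correspond to if you were to run them from R with the `gert` package. This is only a sketch (the file names and commit message come from this example); in the workshop we will click the buttons in RStudio’s Git pane instead:

```
library(gert)

git_pull()                              # 1. Pull: get the latest from GitHub.com
git_add(c(".gitignore",                 # 2. Stage: mark which files to include
          "r-workshop.Rproj",
          "README.md"))
git_commit("Describe repo in README")   # 3. Commit: snapshot with a human-readable message
git_push()                              # 4. Push: send the commit up to GitHub.com
```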
### 4\.6\.1 Pull
We start off by “Pulling” from the remote repository (GitHub.com) to make sure that our local copy has the most up\-to\-date information that is available online. Right now, since we just created the repo and are the only ones that have permission to work on it, we can be pretty confident that there isn’t new information available. But we pull anyways because this is a very safe habit to get into for when you start collaborating with yourself across computers or others. Best practice is to pull often: it costs nothing (other than an internet connection).
Pull by clicking the teal Down Arrow. (Notice also how when you highlight a filename, a preview of the differences displays below).
### 4\.6\.2 Stage
Let’s click the boxes next to each file. This is called “staging a file”: you are indicating that you want GitHub to track this file, and that you will be syncing it shortly. Notice:
* .Rproj and .gitignore files: the question marks turn into an A because these are new files that have been added to your repo (automatically by RStudio, not by you).
* README.md file: the M indicates that this was modified (by you)
These are the codes used to describe how the files have changed (from the RStudio [cheatsheet](http://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf)):
### 4\.6\.3 Commit
Committing is different from saving our files (which we still have to do! RStudio will indicate a file is unsaved with red text and an asterisk). We commit a single file or a group of files when we are ready to save a snapshot in time of the progress we’ve made. Maybe this is after a big part of the analysis was done, or when you’re done working for the day.
Committing our files is a 2\-step process.
First, you write a “commit message,” which is a human\-readable note about what has changed that will accompany GitHub’s non\-human\-readable alphanumeric code to track our files. I think of commit messages like breadcrumbs to my Future Self: how can I use this space to be useful for me if I’m trying to retrace my steps (and perhaps in a panic?).
Second, you press Commit.
When we have committed successfully, we get a rather unsuccessful\-looking pop\-up message. You can read this message as “Congratulations! You’ve successfully committed 3 files, 2 of which are new!” It is also providing you with that alphanumeric SHA code that GitHub is using to track these files.
If our attempt was not successful, we will see an Error. Otherwise, interpret this message as a joyous one.
> Does your pop\-up message say “Aborting commit due to empty commit message”? GitHub is really serious about writing human\-readable commit messages.
When we close this window there is going to be (in my opinion) a very subtle indication that we are not done with the syncing process.
We have successfully committed our work as a breadcrumb\-message\-approved snapshot in time, but it still only exists locally on our computer. We can commit without an internet connection; we have not done anything yet to tell GitHub that we want this pushed to the remote repo at GitHub.com. So as the last step, we push.
### 4\.6\.4 Push
The last step in the syncing process is to Push!
Awesome! We’re done here in RStudio for the moment, let’s check out the remote on GitHub.com.
4\.7 Commit history
-------------------
The files you added should be on github.com.
Notice how the README.md file we created is automatically displayed at the bottom. Since it is good practice to have a README file that identifies what code does (i.e. why it exists), GitHub will display a Markdown file called README nicely formatted.
Let’s also explore the commit history. The 2 commits we’ve made (the first was when we originally initiated the repo from GitHub.com) are there!
4\.8 Project\-oriented workflows
--------------------------------
Let’s go back to RStudio and talk about how we set up well\-organized projects and workflows for our data analyses.
This GitHub repository is now also an RStudio Project (capital P Project). This just means that RStudio has saved this additional file with extension `.Rproj` (ours is `r-workshop.Rproj`) to store specific settings for this project. It’s a bit of technology to help us get into the good habit of having a project\-oriented workflow.
A [project\-oriented workflow](https://rstats.wtf/project-oriented-workflow.html) means that we are going to organize all of the relevant things we need for our analyses in the same place. That means that this is the place where we keep all of our data, code, figures, notes, etc.
R Projects are great for reproducibility, because our self\-contained working directory will be the **first** place R looks for files.
### 4\.8\.1 Working directory
Now that we have our Project, let’s revisit this important question: where are we? Now we are in our Project. Everything we do will by default be saved here so we can be nice and organized.
And this is important because if Allison clones this repository that you just made and saves it in `Allison/my/projects/way/over/here`, she will still be able to interact with your files just as you do here, because all paths are relative to the Project.
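Here is a quick sketch of why this works: the Project sets the working directory, so relative paths resolve the same way for everyone who clones the repo, wherever they keep it. (The `data/fish.csv` file below is one we will add later in this chapter.)

```
# where are we? the Project root (your path will differ)
getwd()

# paths are relative to the Project root, so this same line works for
# anyone who clones the repo, no matter where they store it locally
fish <- read.csv("data/fish.csv")
```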
4\.9 Project\-oriented workflows in action (aka our analytical setup)
---------------------------------------------------------------------
Let’s get a bit organized. First, let’s create a new R Markdown file where we will do our analyses. This will be nice because you can also write notes to yourself in this document.
### 4\.9\.1 Create a new Rmd file
So let’s do this (again):
File \> New File \> R Markdown … (or click the green plus in the top left corner).
Let’s set up this file so we can use it for the rest of the day. I’m going to update the header with a new title and add my name, and then I’m going to delete the rest of the document so that we have a clean start.
> **Efficiency Tip**: I use Shift \- Command \- Down Arrow to highlight text from my cursor to the end of the document
```
---
title: "Creating graphs in R with `ggplot2`"
author: "Julie Lowndes"
date: "01/27/2020"
output: html_document
---
# Plots with ggplot2
We are going to make plots in R and it's going to be amazing.
```
Now, let’s save it. I’m going to call my file `plots-ggplot.Rmd`.
Notice that when we save this file, it pops up in our Git tab. Git knows that there is something new in our repo.
Let’s also knit this file. And look: Git also sees the knitted .html.
And let’s practice syncing our file to GitHub: pull, stage, commit, push
> **Troubleshooting:** What if a file doesn’t show up in the Git tab and you expect that it should? Check to make sure you’ve saved the file. If the filename is red with an asterisk, there have been changes since it was saved. Remember to save before syncing to GitHub!
### 4\.9\.2 Create data and figures folders
Let’s create a few folders to stay organized: one for the raw data, and one for the figures we will output. We can do this in RStudio, in the Files pane in the bottom right, by clicking the New Folder button:
* folder called “data”
* folder called “figures”
We can press the refresh button in the top\-right of this pane (next to the “More” button) to have these show up in alphabetical order.
Now let’s go to our Finder or Windows Explorer: our new folders are there as well!
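If you prefer, the same folders can be created with a line of R each; this sketch does exactly what clicking the New Folder button does:

```
# create the folders from the R console instead of clicking New Folder
dir.create("data")
dir.create("figures")
```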
### 4\.9\.3 Move data files to data folder
You downloaded several files for this workshop from the [r\-for\-excel\-data folder](https://drive.google.com/drive/folders/1RywSUw8hxETlROdIhLIntxPsZq0tKSdS?usp=sharing), and we’ll move these data into our repo now. These data files are a mix of comma\-separated value (.csv) files and Excel spreadsheets (.xlsx):
* ca\_np.csv
* ci\_np.xlsx
* fish.csv
* inverts.xlsx
* kelp\_fronds.xlsx
* lobsters.xlsx
* lobsters2\.xlsx
* noaa\_landings.csv
* substrate.xlsx
Copy\-paste or drag all of these files into the ‘data’ subfolder of your R project. Make sure you do not also copy the original folder; we don’t need any subfolders in our data folder.
Now let’s go back to RStudio. We can click on the data folder in the Files tab and see them.
The data folder also shows up in your Git tab. But the figures folder does not. That is because GitHub cannot track an empty folder; it can only track files within a folder.
Let’s sync these data files (we will be able to sync the figures folder shortly). We can stage multiple files at once by typing Command \- A and clicking “Stage” (or using the space bar). To Sync: pull \- stage \- commit \- push!
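Once the files are in `data/`, reading them into R will look roughly like this (a sketch, assuming the `readr` and `readxl` packages are installed):

```
library(readr)    # for .csv files
library(readxl)   # for .xlsx files

fish     <- read_csv("data/fish.csv")
lobsters <- read_excel("data/lobsters.xlsx")
```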
### 4\.9\.4 Activity
Edit your README and practice syncing (pull, stage, commit, push). For example,
```
"We use the following data from the Santa Barbara Coastal Term Ecological Research and National Oceanic and Atmospheric Administration in our analyses"
```
Explore your Commit History, and discuss with your neighbor.
4\.10 Committing \- how often? Tracking changes in your files
-------------------------------------------------------------
Whenever you make changes to the files in Github, you will walk through the Pull \-\> Stage \-\> Commit \-\> Push steps.
I tend to do this every time I finish a task (basically when I start getting nervous that I will lose my work). Once something is committed, it is very difficult to lose it.
4\.11 Issues
------------
Let’s go back to our repo on GitHub.com, and talk about Issues.
Issues “track ideas, enhancements, tasks, or bugs for work on GitHub.” \- [GitHub help article](https://help.github.com/en/articles/about-issues).
You can create an issue for a topic, track progress, let others ask questions, provide links and updates, and close the issue when it is completed.
In a public repo, anyone with a username can create and comment on issues. In a private repo, only users with permission can create and comment on issues, or see them at all.
GitHub search is awesome – it will search both code and issues!
### 4\.11\.1 Issues in the wild!
Here are some examples of “traditional” and “less traditional” Issues:
Bug reports, code, feature, \& help requests: [ggplot2](https://github.com/tidyverse/ggplot2/issues)
Project submissions and progress tracking: [MozillaFestival](https://github.com/MozillaFestival/mozfest-program-2018/issues)
Private conversations and archiving: [OHI Fellows (private)](https://github.com/OHI-Science/globalfellows-issues/issues/)
### 4\.11\.2 END **GitHub** session!
We’ll continue practicing GitHub throughout the rest of the book, but see Chapter [9](collaborating.html#collaborating) for explicit instructions on collaborating in GitHub.
4\.1 Summary
------------
We will learn about version control and practice a workflow with GitHub and RStudio that streamlines working with our most important collaborator: Future You.
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (i.e. RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [4](github.html#github)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
### 4\.1\.1 Objectives
Today, we’ll interface with GitHub from our local computers using RStudio.
> **Aside**: There are many other ways to interact with GitHub, including GitHub’s Desktop App and the command line ([here is Jenny Bryan’s list of git clients](http://stat545.com/git02_git-clients.html)). You have the largest suite of options if you interface through the command line, but the most common things you’ll do can be done through one of these other applications (i.e. RStudio).
Here’s what we’ll do, since we’ve already set up git on your computers in the previous session (Chapter [4](github.html#github)):
1. create a repository on Github.com (remote)
2. clone locally using RStudio
3. sync local to remote: pull, stage, commit, push
4. explore github.com files, commit history, README
5. project\-oriented workflows
6. project\-oriented workflows in action
### 4\.1\.2 Resources
* [Excuse me, do you have a moment to talk about version control?](https://peerj.com/preprints/3159/) by Jenny Bryan
* [Happy Git with R](http://happygitwithr.com/) by Jenny Bryan, specifically [Detect Git from RStudio](http://happygitwithr.com/rstudio-see-git.html)
* [What They Forgot to Teach You About R](https://rstats.wtf/) by Jenny Bryan, specifically [Project\-oriented workflows](https://rstats.wtf/project-oriented-workflow.html)
* [GitHub Quickstart](https://rawgit.com/nazrug/Quickstart/master/GithubQuickstart.html) by Melanie Frazier
* [GitHub for Project Management](https://openscapes.github.io/series/github-issues.html) by Openscapes
4\.2 Why should R users use Github?
-----------------------------------
Modern R users use GitHub because it helps make coding collaborative and social while also providing huge benefits to organization, archiving, and being able to find your files easily when you need them.
One of the most compelling reasons for me is that it ends (or nearly ends) the horror of keeping track of versions.
Basically, we get away from this:
This is a nightmare not only because I have NO idea which is truly the version we used in that analysis we need to update, but because it is going to take a lot of detective work to see what actually changed between each file. Also, it is very sad to think about the amount of time everyone involved is spending on bookkeeping: is everyone downloading an attachment, dragging it to wherever they organize this on their own computers, and then renaming everything? Hours and hours of all of our lives.
But then there is GitHub.
In GitHub, in this example you will likely only see a single file, which is the most recent version. GitHub’s job is to track who made any changes and when (so no need to save a copy with your name or date at the end), and it also requires that you write something human\-readable that will be a breadcrumb for you in the future. It is also designed to be easy to compare versions, and you can easily revert to previous versions.
GitHub also supercharges you as a collaborator. First and foremost with Future You, but also sets you up to collaborate with Future Us!
GitHub, especially in combination with RStudio, is also game\-changing for publishing and distributing. You can — and we will — publish and share files openly on the internet.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **Github** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but also as allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features to be able visualize differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) map data files, [render text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [track changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
Github was developed for software development, so much of the functionality and terminology of that is exciting for professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
### 4\.2\.1 What is Github? And Git?
OK so what is GitHub? And Git?
* **Git** is a program that you install on your computer: it is version control software that tracks changes to your files over time.
* **Github** is a website that is essentially a social media platform for your git\-versioned files. GitHub stores all your versioned files as an archive, but also as allows you to interact with other people’s files and has management tools for the social side of software projects. It has many nice features to be able visualize differences between [images](https://help.github.com/articles/rendering-and-diffing-images/), [rendering](https://help.github.com/articles/mapping-geojson-files-on-github/) \& [diffing](https://github.com/blog/1772-diffable-more-customizable-maps) map data files, [render text data files](https://help.github.com/articles/rendering-csv-and-tsv-data/), and [track changes in text](https://help.github.com/articles/rendering-differences-in-prose-documents/).
Github was developed for software development, so much of the functionality and terminology of that is exciting for professional programmers (e.g., branches and pull requests) isn’t necessarily the right place for us as new R users to get started.
So we will be learning and practicing GitHub’s features and terminology on a “need to know basis” as we start managing our projects with GitHub.
4\.3 Github Configuration
-------------------------
We’ve just configured Github this at the end of Chapter [3](rstudio.html#rstudio). So skip to the next section if you’ve just completed this! However, if you’re dropping in on this chapter to setup Github, make sure you first [configure Github with these instructions](rstudio.html#github-brief-intro-config) before continuing.
4\.4 Create a repository on Github.com
--------------------------------------
Let’s get started by going to <https://github.com> and going to our user profile. You can do this by typing your username in the URL (github.com/username), or after signing in, by clicking on the top\-right button and going to your profile.
This will have an overview of you and your work, and then you can click on the Repository tab
Repositories are the main “unit” of GitHub: they are what GitHub tracks. They are essentially project\-level folders that will contain everything associated with a project. It’s where we’ll start too.
We create a new repository (called a “repo”) by clicking “New repository.”
Choose a name. Call it whatever you want (the shorter the better), or follow me for convenience. I will call mine `r-workshop`.
Also, add a description, make it public, create a README file, and create your repo!
The *Add gitignore* option adds a document where you can identify files or file\-types you want Github to ignore. These files will stay in on the local Github folder (the one on your computer), but will not be uploaded onto the web version of Github.
The *Add a license* option adds a license that describes how other people can use your Github files (e.g., open source, but no one can profit from them, etc.). We won’t worry about this today.
Check out our new repository!
Great! So now we have our new repository that exists in the Cloud. Let’s get it established locally on our computers: that is called “cloning.”
4\.5 Clone your repository using RStudio
----------------------------------------
Let’s clone this repo to our local computer using RStudio. Unlike downloading, cloning keeps all the version control and user information bundled with the files.
### 4\.5\.1 Copy the repo address
First, copy the web address of the repository you want to clone. We will use HTTPS.
> **Aside**: HTTPS is default, but you could alternatively set up with SSH. This is more advanced than we will get into here, but allows 2\-factor authentication. See [Happy Git with R](https://happygitwithr.com/credential-caching.html#special-consideration-re-two-factor-authentication) for more information.
### 4\.5\.2 RStudio: New Project
Now go back to RStudio, and click on New Project. There are a few different ways; you could also go to File \> New Project…, or click the little green \+ with the R box in the top left.
also in the File menu).
### 4\.5\.3 Select Version Control
### 4\.5\.4 Select Git
Since we are using git.
### 4\.5\.5 Paste the repo address
Paste the repo address (which is still in your clipboard) into in the “Repository URL” field. The “Project directory name” should autofill; if it does not press *tab*, or type it in. It is best practice to keep the “Project directory name” THE SAME as the repository name.
When cloned, this repository is going to become a folder on your computer.
At this point you can save this repo anywhere. There are different schools of thought but we think it is useful to create a high\-level folder where you will keep your github repos to keep them organized. We call ours `github` and keep it in our root folder (`~/github`), and so that is what we will demonstrate here — you are welcome to do the same. Press “Browse…” to navigate to a folder and you have the option of creating a new folder.
Finally, click Create Project.
### 4\.5\.6 Admire your local repo
If everything went well, the repository will show up in RStudio!
The repository is also saved to the location you specified, and you can navigate to it as you normally would in Finder or Windows Explorer:
Hooray!
### 4\.5\.7 Inspect your local repo
Let’s notice a few things:
First, our working directory is set to `~/github/r-workshop`, and `r-workshop` is also named in the top right hand corner.
Second, we have a Git tab in the top right pane! Let’s click on it.
Our Git tab has 2 items:
* .gitignore file
* .Rproj file
These have been added to our repo by RStudio — we can also see them in the File pane in the bottom right of RStudio. These are helper files that RStudio has added to streamline our workflow with GitHub and R. We will talk about these a bit more soon. One thing to note about these files is that they begin with a period (`.`) which means they are hidden files: they show up in the Files pane of RStudio but won’t show up in your Finder or Windows Explorer.
Going back to the Git tab, both these files have little yellow icons with question marks `?`. This is GitHub’s way of saying: “I am responsible for tracking everything that happens in this repo, but I’m not sure what is going on with these files yet. Do you want me to track them too?”
We will handle this in a moment; first let’s look at the README.md file.
### 4\.5\.8 Edit your README file
Let’s also open up the README.md. This is a Markdown file, which is the same language we just learned with R Markdown. It’s like an R Markdown file without the ability to run R code.
We will edit the file and illustrate how GitHub tracks files that have been modified (to complement seeing how it tracks files that have been added).
README files are common in programming; they are the first place that someone will look to see why code exists and how to run it.
In my README, I’ll write:
```
This repo is for my analyses at RStudio::conf(2020).
```
When I save this, notice how it shows up in my Git tab. It has a blue “M”: GitHub is already tracking this file, and tracking it line\-by\-line, so it knows that something is different: it’s Modified with an M.
Great. Now let’s sync back to GitHub in 4 steps.
4\.6 Sync from RStudio (local) to GitHub (remote)
-------------------------------------------------
Syncing to GitHub.com means 4 steps:
1. Pull
2. Stage
3. Commit
4. Push
We start off this whole process by clicking on the Commit section.
### 4\.6\.1 Pull
We start off by “Pulling” from the remote repository (GitHub.com) to make sure that our local copy has the most up\-to\-date information that is available online. Right now, since we just created the repo and are the only ones that have permission to work on it, we can be pretty confident that there isn’t new information available. But we pull anyway because this is a very safe habit to get into for when you start collaborating with yourself across computers, or with others. Best practice is to pull often: it costs nothing (other than an internet connection).
Pull by clicking the teal Down Arrow. (Notice also how when you highlight a filename, a preview of the differences displays below).
### 4\.6\.2 Stage
Let’s click the boxes next to each file. This is called “staging a file”: you are indicating that you want GitHub to track this file, and that you will be syncing it shortly. Notice:
* .Rproj and .gitignore files: the question marks turn into an A because these are new files that have been added to your repo (automatically by RStudio, not by you).
* README.md file: the M indicates that this was modified (by you)
These are the codes used to describe how the files are changed (from the RStudio [cheatsheet](http://www.rstudio.com/wp-content/uploads/2016/01/rstudio-IDE-cheatsheet.pdf)):
### 4\.6\.3 Commit
Committing is different from saving our files (which we still have to do! RStudio will indicate a file is unsaved with red text and an asterisk). We commit a single file or a group of files when we are ready to save a snapshot in time of the progress we’ve made. Maybe this is after a big part of the analysis was done, or when you’re done working for the day.
Committing our files is a 2\-step process.
First, you write a “commit message,” which is a human\-readable note about what has changed that will accompany GitHub’s non\-human\-readable alphanumeric code to track our files. I think of commit messages like breadcrumbs to my Future Self: how can I use this space to be useful for me if I’m trying to retrace my steps (and perhaps in a panic?).
Second, you press Commit.
When we have committed successfully, we get a rather unsuccessful\-looking pop\-up message. You can read this message as “Congratulations! You’ve successfully committed 3 files, 2 of which are new!” It is also providing you with that alphanumeric SHA code that GitHub is using to track these files.
If our attempt was not successful, we will see an Error. Otherwise, interpret this message as a joyous one.
> Does your pop\-up message say “Aborting commit due to empty commit message”? GitHub is really serious about writing human\-readable commit messages.
When we close this window there is going to be (in my opinion) a very subtle indication that we are not done with the syncing process.
We have successfully committed our work as a breadcrumb\-message\-approved snapshot in time, but it still only exists locally on our computer. We can commit without an internet connection; we have not done anything yet to tell GitHub that we want this pushed to the remote repo at GitHub.com. So as the last step, we push.
### 4\.6\.4 Push
The last step in the syncing process is to Push!
Awesome! We’re done here in RStudio for the moment, let’s check out the remote on GitHub.com.
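For reference, the same pull \- stage \- commit \- push cycle can also be run from the R console with the `gert` package. This is a hedged sketch (it assumes `gert` is installed and your credentials are set up); the buttons in RStudio’s Git tab are all you need for this book.

```
library(gert)
# Pull, stage, commit, and push: the same four steps as the Git tab buttons
git_pull()
git_add(c(".gitignore", "r-workshop.Rproj", "README.md"))
git_commit("Update README with project description")
git_push()
```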
4\.7 Commit history
-------------------
The files you added should be on github.com.
Notice how the README.md file we created is automatically displayed at the bottom. Since it is good practice to have a README file that identifies what code does (i.e. why it exists), GitHub will display a Markdown file called README nicely formatted.
Let’s also explore the commit history. The 2 commits we’ve made (the first was when we originally initiated the repo from GitHub.com) are there!
4\.8 Project\-oriented workflows
--------------------------------
Let’s go back to RStudio and how we set up well\-organized projects and workflows for our data analyses.
This GitHub repository is now also an RStudio Project (capital P Project). This just means that RStudio has saved this additional file with extension `.Rproj` (ours is `r-workshop.Rproj`) to store specific settings for this project. It’s a bit of technology to help us get into the good habit of having a project\-oriented workflow.
A [project\-oriented workflow](https://rstats.wtf/project-oriented-workflow.html) means that we are going to organize all of the relevant things we need for our analyses in the same place. That means that this is the place where we keep all of our data, code, figures, notes, etc.
R Projects are great for reproducibility, because our self\-contained working directory will be the **first** place R looks for files.
### 4\.8\.1 Working directory
Now that we have our Project, let’s revisit this important question: where are we? Now we are in our Project. Everything we do will by default be saved here so we can be nice and organized.
And this is important because if Allison clones this repository that you just made and saves it in `Allison/my/projects/way/over/here`, she will still be able to interact with your files as you are here.
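A quick way to see this from the console (a minimal sketch; your path will differ):

```
# With the r-workshop Project open, the working directory is the project folder
getwd()

# Because of that, relative paths like "data/ca_np.csv" resolve the same way
# for anyone who clones the repo, no matter where they saved it
file.path("data", "ca_np.csv")
```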
4\.9 Project\-oriented workflows in action (aka our analytical setup)
---------------------------------------------------------------------
Let’s get a bit organized. First, let’s create a new R Markdown file where we will do our analyses. This will be nice because you can also write notes to yourself in this document.
### 4\.9\.1 Create a new Rmd file
So let’s do this (again):
File \> New File \> R Markdown … (or click the green plus in the top left corner).
Let’s set up this file so we can use it for the rest of the day. I’m going to update the header with a new title and add my name, and then I’m going to delete the rest of the document so that we have a clean start.
> **Efficiency Tip**: I use Shift \- Command \- Down Arrow to highlight text from my cursor to the end of the document
```
---
title: "Creating graphs in R with `ggplot2`"
author: "Julie Lowndes"
date: "01/27/2020"
output: html_document
---
# Plots with ggplot2
We are going to make plots in R and it's going to be amazing.
```
Now, let’s save it. I’m going to call my file `plots-ggplot.Rmd`.
Notice that when we save this file, it pops up in our Git tab. Git knows that there is something new in our repo.
Let’s also knit this file. And look: Git also sees the knitted .html.
And let’s practice syncing our file to GitHub: pull, stage, commit, push
> **Troubleshooting:** What if a file doesn’t show up in the Git tab and you expect that it should? Check to make sure you’ve saved the file. If the filename is red with an asterisk, there have been changes since it was saved. Remember to save before syncing to GitHub!
### 4\.9\.2 Create data and figures folders
Let’s create a few folders to be organized. Let’s have one for our raw data, and one for the figures we will output. We can do this in RStudio, in the Files pane in the bottom right, by clicking the New Folder button:
* folder called “data”
* folder called “figures”
We can press the refresh button in the top\-right of this pane (next to the “More” button) to have these show up in alphabetical order.
Now let’s go to our Finder or Windows Explorer: our new folders are there as well!
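If you prefer to do this with code, base R can create the same folders (a small sketch; the New Folder button does exactly the same thing):

```
# Create the two folders inside the project root
dir.create("data")
dir.create("figures")
```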
### 4\.9\.3 Move data files to data folder
You downloaded several files for this workshop from the [r\-for\-excel\-data folder](https://drive.google.com/drive/folders/1RywSUw8hxETlROdIhLIntxPsZq0tKSdS?usp=sharing), and we’ll move these data into our repo now. These data files are a mix of comma\-separated value (.csv) files and Excel spreadsheets (.xlsx):
* ca\_np.csv
* ci\_np.xlsx
* fish.csv
* inverts.xlsx
* kelp\_fronds.xlsx
* lobsters.xlsx
* lobsters2\.xlsx
* noaa\_landings.csv
* substrate.xlsx
Copy\-paste or drag all of these files into the ‘data’ subfolder of your R project. Make sure you do not also copy the original folder; we don’t need any subfolders in our data folder.
Now let’s go back to RStudio. We can click on the data folder in the Files tab and see them.
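You can also confirm from the console that the files landed where you expect:

```
# List the files now sitting in the data folder
list.files("data")
```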
The data folder also shows up in your Git tab. But the figures folder does not. That is because GitHub cannot track an empty folder, it can only track files within a folder.
Let’s sync these data files (we will be able to sync the figures folder shortly). We can stage multiple files at once by typing Command \- A and clicking “Stage” (or using the space bar). To Sync: pull \- stage \- commit \- push!
### 4\.9\.4 Activity
Edit your README and practice syncing (pull, stage, commit, push). For example,
```
"We use the following data from the Santa Barbara Coastal Term Ecological Research and National Oceanic and Atmospheric Administration in our analyses"
```
Explore your Commit History, and discuss with your neighbor.
4\.10 Committing \- how often? Tracking changes in your files
-------------------------------------------------------------
Whenever you make changes to the files in your repo, you will walk through the Pull \-\> Stage \-\> Commit \-\> Push steps.
I tend to do this every time I finish a task (basically when I start getting nervous that I will lose my work). Once something is committed, it is very difficult to lose it.
4\.11 Issues
------------
Let’s go back to our repo on GitHub.com, and talk about Issues.
Issues “track ideas, enhancements, tasks, or bugs for work on GitHub.” \- [GitHub help article](https://help.github.com/en/articles/about-issues).
You can create an issue for a topic, track progress, let others ask questions, provide links and updates, and close the issue when it is completed.
In a public repo, anyone with a username can create and comment on issues. In a private repo, only users with permission can create and comment on issues, or see them at all.
GitHub search is awesome: it will search both code and issues!
### 4\.11\.1 Issues in the wild!
Here are some examples of “traditional” and “less traditional” Issues:
Bug reports, code, feature, \& help requests: [ggplot2](https://github.com/tidyverse/ggplot2/issues)
Project submissions and progress tracking: [MozillaFestival](https://github.com/MozillaFestival/mozfest-program-2018/issues)
Private conversations and archiving: [OHI Fellows (private)](https://github.com/OHI-Science/globalfellows-issues/issues/)
### 4\.11\.2 END **GitHub** session!
We’ll continue practicing GitHub throughout the rest of the book, but see Chapter [9](collaborating.html#collaborating) for explicit instructions on collaborating in GitHub.
Chapter 5 Graphs with ggplot2
=============================
5\.1 Summary
------------
In this session, we’ll first learn how to read some external data (from .xls, .xlsx, and CSV files) into R with the `readr` and `readxl` packages (both part of the `tidyverse`).
Then, we’ll write reproducible code to build graphs piece\-by\-piece. In Excel, graphs are made by manually selecting options \- which, as we’ve discussed previously, may not be the best option for reproducibility. Also, if we haven’t built a graph with reproducible code, then we might not be able to easily recreate a graph *or* use that code again to make the same style graph with different data.
Using `ggplot2`, the graphics package within the `tidyverse`, we’ll write reproducible code to manually and thoughtfully build our graphs.
> “ggplot2 implements the grammar of graphics, a coherent system for describing and building graphs. With ggplot2, you can do more faster by learning one system and applying it in many places.” \- [R4DS](http://r4ds.had.co.nz/data-visualisation.html)
So yeah…that `gg` is from “grammar of graphics.”
We’ll use the `ggplot2` package, but the function we use to initialize a graph will be `ggplot`, which works best for data in tidy format (i.e., a column for every variable, and a row for every observation). Graphics with `ggplot` are built step\-by\-step, adding new elements as layers with a plus sign (`+`) between layers (note: this is different from the pipe operator, `%>%`). Adding layers in this fashion allows for extensive flexibility and customization of plots.
### 5\.1\.1 Objectives
* Read in external data (Excel files, CSVs) with `readr` and `readxl`
* Initial data exploration
* Build several common types of graphs (scatterplot, column, line) in ggplot2
* Customize gg\-graph aesthetics (color, style, themes, etc.)
* Update axis labels and titles
* Combine compatible graph types (geoms)
* Build multiseries graphs
* Split up data into faceted graphs
* Export figures with `ggsave()`
### 5\.1\.2 Resources
* [readr documentation](https://readr.tidyverse.org/) from tidyverse.org
* [readxl documentation](https://readxl.tidyverse.org/) from tidyverse.org
* [readxl workflows](https://readxl.tidyverse.org/articles/articles/readxl-workflows.html) from tidyverse.org
* [Chapter 3 \- Data Visualization in R for Data Science by Grolemund and Wickham](https://r4ds.had.co.nz/data-visualisation.html)
* [ggplot2\-cheatsheet\-2\.1\.pdf](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf)
* [Graphs with ggplot2 \- Cookbook for R](http://www.cookbook-r.com/Graphs/#graphs-with-ggplot2)
* [“Why I use ggplot2” \- David Robinson Blog Post](http://varianceexplained.org/r/why-I-use-ggplot2/)
5\.2 Getting started \- In existing .Rmd, attach packages
---------------------------------------------------------
In your existing `plots-ggplot.Rmd` from Session 2, remove everything below the first code chunk.
The `ggplot2` package is part of the `tidyverse`, so we don’t need to attach it separately. Attach the `tidyverse`, `readxl` and `here` packages in the top\-most code chunk of your .Rmd.
```
library(tidyverse)
library(readxl)
library(here)
```
5\.3 Read in .xlsx and .csv files
---------------------------------
In this session, we’ll use data for parks visitation from two files:
* A comma\-separated\-value (CSV) file containing visitation data for all National Parks in California (ca\_np.csv)
* A single Excel worksheet containing only visitation for Channel Islands National Park (ci\_np.xlsx)
### 5\.3\.1 `read_csv()` to read in comma\-separated\-value (.csv) files
There are many types of files containing data that you might want to work with in R. A common one is a comma separated value (CSV) file, which contains values with each column entry separated by a comma delimiter. CSVs can be opened, viewed, and worked with in Excel just like an .xls or .xlsx file \- but let’s learn how to get data directly from a CSV into R where we can work with it more reproducibly.
To read in the ca\_np.csv file, we need to:
* insert a new code chunk
* use `read_csv()` to read in the file
* use `here()` within `read_csv()` to tell it where to look
* assign the stored data an object name (we’ll store ours as ca\_np)
```
ca_np <- read_csv(here("data", "ca_np.csv"))
```
Look in your Environment to see that **ca\_np** now shows up. Click on the object in the Environment, and R will automatically run the `View()` function for you to pull up your data in a separate viewing tab. Now we can look at it in the spreadsheet format we’re used to.
We can explore our data frame a bit more to see what it contains. For example:
* `names()`: to see the variable (column) names
* `head()`: to see the first *x* rows (6 is the default)
* `summary()`: see a quick summary of each variable
(Remember that `names()` is the name of the function but `names(ca_np)` is how we use it on the data.)
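For example, to explore `ca_np`:

```
names(ca_np)    # variable (column) names
head(ca_np)     # first 6 rows
summary(ca_np)  # quick summary of each variable
```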
Cool! Next, let’s read in ci\_np.xlsx (an Excel file) using `read_excel()`.
### 5\.3\.2 `readxl` to read in Excel files
We also have an Excel file (ci\_np.xlsx) that contains observations **only** for Channel Islands National Park visitation. Both `readr` and `readxl` are part of the `tidyverse`, which means we should expect their functions to have similar syntax and structure.
*Note: If `readxl` is part of the `tidyverse`, then why did I have to attach it separately?* Great question! To keep the `tidyverse` manageable, there are “core” packages (`readr`, `dplyr`, `tidyr`, `ggplot2`, `forcats`, `purrr`, `tibble`, `stringr`) that you would expect to use frequently, and those are automatically attached when you use `library(tidyverse)`. But there are *also* more specialized `tidyverse` packages (e.g. `readxl`, `reprex`, `lubridate`, `rvest`) that are built with similar design philosophy, but are not automatically attached with `library(tidyverse)`. Those specialized packages are **installed** along with the `tidyverse`, but need to be attached individually (e.g. with `library(readxl)`).
Use `read_excel()` to get the ci\_np.xlsx data into R:
```
ci_np <- read_excel(here("data", "ci_np.xlsx"))
```
*Note: If you want to explicitly read in an .xlsx or .xls file, you can use `read_xlsx()` or `read_xls()` instead. `read_excel()` will make its best guess about which type of Excel file you’re reading in, so is the generic form.*
Explore the ci\_np data frame as above using functions like `View()`, `names()`, `head()`, and `summary()`.
Now that we have read in the National Parks visitation data, let’s use `ggplot2` to visualize it.
5\.4 Our first ggplot graph: Visitors to Channel Islands NP
-----------------------------------------------------------
To create a bare\-bones ggplot graph, we need to tell R three basic things:
1. We’re using `ggplot2::ggplot()`
2. Data we’re using \& variables we’re plotting (i.e., what is x and/or y?)
3. What type of graph we’re making (the type of *geom*)
Generally, that structure will look like this:
```
ggplot(data = df_name, aes(x = x_var_name, y = y_var_name)) +
geom_type()
```
Breaking that down:
* First, tell R you’re using `ggplot()`
* Then, tell it the object name where variables exist (`data = df_name`)
* Next, tell it the aesthetics `aes()` to specify which variables you want to plot
* Then add a layer for the type of geom (graph type) with `geom_*()` \- for example, `geom_point()` is a scatterplot, `geom_line()` is a line graph, `geom_col()` is a column graph, etc.
Let’s do that to create a line graph of visitors to Channel Islands National Park:
```
ggplot(data = ci_np, aes(x = year, y = visitors)) +
geom_line()
```
We’re going to be doing a lot of plot variations with those same variables. Let’s store the first line as object `gg_base` so that we don’t need to retype it each time:
```
gg_base <- ggplot(data = ci_np, aes(x = year, y = visitors))
```
Or, we could change that to a scatterplot just by updating the `geom_*`:
```
gg_base +
geom_point()
```
We could even do that for a column graph:
```
gg_base +
geom_col()
```
Or an area plot…
```
gg_base +
geom_area()
```
We can see that updating to different `geom_*` types is quick, so long as the types of graphs we’re switching between are compatible.
The data are there, now let’s do some data viz customization.
5\.5 Intro to customizing `ggplot` graphs
-----------------------------------------
First, we’ll customize some aesthetics (e.g. colors, styles, axis labels, etc.) of our graphs based on non\-variable values.
> We can change the aesthetics of elements in a ggplot graph by adding arguments within the layer where that element is created.
Some common arguments we’ll use first are:
* `color =` or `colour =`: update point or line colors
* `fill =`: update fill color for objects with areas
* `linetype =`: update the line type (dashed, long dash, etc.)
* `pch =`: update the point style
* `size =`: update the element size (e.g. of points or line thickness)
* `alpha =`: update element opacity (1 \= opaque, 0 \= transparent)
Building on our first line graph, let’s update the line color to “purple” and make the line type “dashed”:
```
gg_base +
geom_line(
color = "purple",
linetype = "dashed"
)
```
How do we know which color names ggplot will recognize? If you google “R colors ggplot2” you’ll find a lot of good resources. Here’s one: [SAPE ggplot2 colors quick reference guide](http://sape.inf.usi.ch/quick-reference/ggplot2/colour)
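As a quick check from within R itself, base R’s `colors()` function lists the built\-in color names that ggplot2 will recognize:

```
# Peek at a few of R's built-in color names
head(colors(), 10)
```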
Now let’s update the color, style, and size of the points on our previous scatterplot graph using `color =`, `size =`, and `pch =` (see `?pch` for the different point styles, which can be further customized).
```
gg_base +
geom_point(color = "purple",
pch = 17,
size = 4,
alpha = 0.5)
```
### 5\.5\.1 Activity: customize your own ggplot graph
Update one of the example graphs you created above to customize **at least** an element color and size!
5\.6 Mapping variables onto aesthetics
--------------------------------------
In the examples above, we have customized aesthetics based on constants that we input as arguments (e.g., the color / style / size isn’t changing based on a variable characteristic or value). Sometimes, however, we **do** want the aesthetics of a graph to depend on a variable. To do that, we’ll **map variables onto graph aesthetics**, meaning we’ll change how an element on the graph looks based on a variable characteristic (usually, character or value).
> When we want to customize a graph element based on a variable’s characteristic or value, add the argument within `aes()` in the appropriate `geom_*()` layer
In short, if updating aesthetics based on a variable, make sure to put that argument inside of `aes()`.
**Example:** Create a ggplot scatterplot graph where the **size** and **color** of the points change based on the **number of visitors**, and make all points the same level of opacity (`alpha = 0.5`). Notice the `aes()` around the `size =` and `color =` arguments.
Also: this is overmapped and unnecessary. Avoid excessive / overcomplicated aesthetic mapping in data visualization.
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
)
```
In the example above, notice that the two arguments that **do** depend on variables are within `aes()`, but since `alpha = 0.5` doesn’t depend on a variable then it is *outside the `aes()` but still within the `geom_point()` layer*.
### 5\.6\.1 Activity: map variables onto graph aesthetics
Create a column plot of Channel Islands National Park visitation over time, where the **fill color** (argument: `fill =`) changes based on the number of **visitors**.
```
gg_base +
geom_col(aes(fill = visitors))
```
**Sync your project with your GitHub repo.**
5\.7 ggplot2 complete themes
----------------------------
While every element of a ggplot graph is manually customizable, there are also built\-in themes (`theme_*()`) that you can add to your ggplot code to make some major headway before making smaller tweaks manually.
Here are a few to try today (but also notice all the options that appear as we start typing `theme_` into our ggplot graph code!):
* `theme_light()`
* `theme_minimal()`
* `theme_bw()`
Here, let’s update our previous graph with `theme_minimal()`:
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
) +
theme_minimal()
```
5\.8 Updating axis labels and titles
------------------------------------
Use `labs()` to update axis labels, and add a title and/or subtitle to your ggplot graph.
```
gg_base +
geom_line(linetype = "dotted") +
theme_bw() +
labs(
x = "Year",
y = "Annual park visitors",
title = "Channel Islands NP Visitation",
subtitle = "(1963 - 2016)"
)
```
**Note**: If you want to update the formatting of axis values (for example, to convert to comma format instead of scientific format above), you can use the `scales` package options (see more from the [R Cookbook](http://www.cookbook-r.com/Graphs/Axes_(ggplot2)/)).
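For example, here is a minimal sketch of comma\-formatted y\-axis values on the graph above (this assumes the `scales` package is installed, which it is as a dependency of the `tidyverse`):

```
gg_base +
  geom_line(linetype = "dotted") +
  theme_bw() +
  scale_y_continuous(labels = scales::comma) +
  labs(x = "Year", y = "Annual park visitors")
```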
5\.9 Combining compatible geoms
-------------------------------
As long as the geoms are compatible, we can layer them on top of one another to further customize a graph.
For example, adding points to a line graph:
```
gg_base +
geom_line(color = "purple") +
geom_point(color = "orange",
aes(size = year),
alpha = 0.5)
```
Or, combine a column and line graph (not sure why you’d want to do this, but you can):
```
gg_base +
geom_col(fill = "orange",
color = "purple") +
geom_line(color = "green")
```
5\.10 Multi\-series ggplot graphs
---------------------------------
In the examples above, we only had a single series \- visitation at Channel Islands National Park. Often we’ll want to visualize multiple series. For example, from the `ca_np` object we have stored, we might want to plot visitation for *all* California National Parks.
To do that, we need to add an aesthetic that lets `ggplot` know how things are going to be grouped. A demonstration of why that’s important \- what happens if we *don’t* let ggplot know how to group things?
```
ggplot(data = ca_np, aes(x = year, y = visitors)) +
geom_line()
```
Well that’s definitely a mess, and it’s because ggplot has no idea that these **should be different series based on the different parks that appear in the ‘park\_name’ column**.
We can make sure R does know by adding an explicit grouping argument (`group =`), or by updating an aesthetic based on *park\_name*:
```
ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name)) +
geom_line()
```
**Note**: You could also add an aesthetic (`color = park_name`) in the `geom_line()` layer to create groupings, instead of in the topmost `ggplot()` layer.
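For example, this produces the same grouping, colored by park:

```
ggplot(data = ca_np, aes(x = year, y = visitors)) +
  geom_line(aes(color = park_name))
```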
Let’s store that topmost line so that we can use it more quickly later on in the lesson:
```
gg_np <- ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name))
```
5\.11 Faceting ggplot graphs
----------------------------
When we facet graphs, we split them up into multiple plotting panels, where each panel contains a subset of the data. In our case, we’ll split the graph above into different panels, each containing visitation data for a single park.
Also notice that any general theme changes made will be applied to *all* of the graphs.
```
gg_np +
geom_line(show.legend = FALSE) +
theme_light() +
labs(x = "year", y = "annual visitors") +
facet_wrap(~ park_name)
```
5\.12 Exporting a ggplot graph with `ggsave()`
----------------------------------------------
If we want our graph to appear in a knitted html, then we don’t need to do anything else. But often we’ll need a saved image file, of specific size and resolution, to share or for publication.
`ggsave()` will export the *most recently run* ggplot graph by default (`plot = last_plot()`), unless you give it the name of a different saved ggplot object. Some common arguments for `ggsave()`:
* `width =`: set exported image width (default inches)
* `height =`: set exported image height (default inches)
* `dpi =`: set dpi (dots per inch)
So to export the faceted graph above at 180 dpi, with a width of 8" and a height of 7", we can use:
```
ggsave(here("figures", "np_graph.jpg"), dpi = 180, width = 8, height = 7)
```
Notice that a .jpg image of that name and size is now stored in the `figures` folder within your working directory. You can change the type of exported image, too (e.g. pdf, tiff, eps, png, bmp, svg).
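For example, swapping the file extension is enough to save a .png instead:

```
ggsave(here("figures", "np_graph.png"), dpi = 180, width = 8, height = 7)
```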
**Sync your project with your GitHub repo.**
* Stage
* Commit
* Pull (to check for remote changes)
* Push!
### 5\.12\.1 End `ggplot` session!
5\.1 Summary
------------
In this session, we’ll first learn how to read some external data (from .xls, .xlsx, and CSV files) into R with the `readr` and `readxl` packages (both part of the `tidyverse`).
Then, we’ll write reproducible code to build graphs piece\-by\-piece. In Excel, graphs are made by manually selecting options \- which, as we’ve discussed previously, may not be the best option for reproducibility. Also, if we haven’t built a graph with reproducible code, then we might not be able to easily recreate a graph *or* use that code again to make the same style graph with different data.
Using `ggplot2`, the graphics package within the `tidyverse`, we’ll write reproducible code to manually and thoughtfully build our graphs.
> “ggplot2 implements the grammar of graphics, a coherent system for describing and building graphs. With ggplot2, you can do more faster by learning one system and applying it in many places.” \- [R4DS](http://r4ds.had.co.nz/data-visualisation.html)
So yeah…that `gg` is from “grammar of graphics.”
We’ll use the `ggplot2` package, but the function we use to initialize a graph will be `ggplot`, which works best for data in tidy format (i.e., a column for every variable, and a row for every observation). Graphics with `ggplot` are built step\-by\-step, adding new elements as layers with a plus sign (`+`) between layers (note: this is different from the pipe operator, `%>%`. Adding layers in this fashion allows for extensive flexibility and customization of plots.
### 5\.1\.1 Objectives
* Read in external data (Excel files, CSVs) with `readr` and `readxl`
* Initial data exploration
* Build several common types of graphs (scatterplot, column, line) in ggplot2
* Customize gg\-graph aesthetics (color, style, themes, etc.)
* Update axis labels and titles
* Combine compatible graph types (geoms)
* Build multiseries graphs
* Split up data into faceted graphs
* Export figures with `ggsave()`
### 5\.1\.2 Resources
* [readr documentation](https://readr.tidyverse.org/) from tidyverse.org
* [readxl documentation](https://readxl.tidyverse.org/) from tidyverse.org
* [readxl workflows](https://readxl.tidyverse.org/articles/articles/readxl-workflows.html) from tidyverse.org
* [Chapter 3 \- Data Visualization in R for Data Science by Grolemund and Wickham](https://r4ds.had.co.nz/data-visualisation.html)
* [ggplot2\-cheatsheet\-2\.1\.pdf](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf)
* [Graphs with ggplot2 \- Cookbook for R](http://www.cookbook-r.com/Graphs/#graphs-with-ggplot2)
* [“Why I use ggplot2” \- David Robinson Blog Post](http://varianceexplained.org/r/why-I-use-ggplot2/)
### 5\.1\.1 Objectives
* Read in external data (Excel files, CSVs) with `readr` and `readxl`
* Initial data exploration
* Build several common types of graphs (scatterplot, column, line) in ggplot2
* Customize gg\-graph aesthetics (color, style, themes, etc.)
* Update axis labels and titles
* Combine compatible graph types (geoms)
* Build multiseries graphs
* Split up data into faceted graphs
* Export figures with `ggsave()`
### 5\.1\.2 Resources
* [readr documentation](https://readr.tidyverse.org/) from tidyverse.org
* [readxl documentation](https://readxl.tidyverse.org/) from tidyverse.org
* [readxl workflows](https://readxl.tidyverse.org/articles/articles/readxl-workflows.html) from tidyverse.org
* [Chapter 3 \- Data Visualization in R for Data Science by Grolemund and Wickham](https://r4ds.had.co.nz/data-visualisation.html)
* [ggplot2\-cheatsheet\-2\.1\.pdf](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf)
* [Graphs with ggplot2 \- Cookbook for R](http://www.cookbook-r.com/Graphs/#graphs-with-ggplot2)
* [“Why I use ggplot2” \- David Robinson Blog Post](http://varianceexplained.org/r/why-I-use-ggplot2/)
5\.2 Getting started \- In existing .Rmd, attach packages
---------------------------------------------------------
In your existing `plots-ggplot.Rmd` from Session 2, remove everything below the first code chunk.
The `ggplot2` package is part of the `tidyverse`, so we don’t need to attach it separately. Attach the `tidyverse`, `readxl` and `here` packages in the top\-most code chunk of your .Rmd.
```
library(tidyverse)
library(readxl)
library(here)
```
5\.3 Read in .xlsx and .csv files
---------------------------------
In this session, we’ll use data for parks visitation from two files:
* A comma\-separated\-value (CSV) file containing visitation data for all National Parks in California (ca\_np.csv)
* A single Excel worksheet containing only visitation for Channel Islands National Park (ci\_np.xlsx)
### 5\.3\.1 `read_csv()` to read in comma\-separated\-value (.csv) files
There are many types of files containing data that you might want to work with in R. A common one is a comma separated value (CSV) file, which contains values with each column entry separated by a comma delimiter. CSVs can be opened, viewed, and worked with in Excel just like an .xls or .xlsx file \- but let’s learn how to get data directly from a CSV into R where we can work with it more reproducibly.
To read in the ca\_np.csv file, we need to:
* insert a new code chunk
* use `read_csv()` to read in the file
* use `here()` within `read_csv()` to tell it where to look
* assign the stored data an object name (we’ll store ours as ca\_np)
```
ca_np <- read_csv(here("data", "ca_np.csv"))
```
Look in your Environment to see that **ca\_np** now shows up. Click on the object in the Environment, and R will automatically run the `View()` function for you to pull up your data in a separate viewing tab. Now we can look at it in the spreadsheet format we’re used to.
We can explore our data frame a bit more to see what it contains. For example:
* `names()`: to see the variable (column) names
* `head()`: to see the first *x* rows (6 is the default)
* `summary()`: see a quick summary of each variable
(Remember that `names()` is the name of the function but `names(ca_np)` is how we use it on the data.)
Cool! Next, let’s read in ci\_np.xlsx an Excel file) using `read_excel()`.
### 5\.3\.2 `readxl` to read in Excel files
We also have an Excel file (ci\_np.xlsx) that contains observations **only** for Channel Islands National Park visitation. Both `readr` and `readxl` are part of the `tidyverse`, which means we should expect their functions to have similar syntax and structure.
*Note: If `readxl` is part of the `tidyverse`, then why did I have to attach it separately?* Great question! Too keep the `tidyverse` manageable, there are “core” packages (`readr`, `dplyr`, `tidyr`, `ggplot2`, `forcats`, `purrr`, `tibble`, `stringr`) that you would expect to use frequently, and those are automatically attached when you use `library(tidyverse)`. But there are *also* more specialized `tidyverse` packages (e.g. `readxl`, `reprex`, `lubridate`, `rvest`) that are built with similar design philosophy, but are not automatically attached with `library(tidyverse)`. Those specialized packages are **installed** along with the `tidyverse`, but need to be attached individually (e.g. with `library(readxl)`).
Use `read_excel()` to get the ci\_np.xlsx data into R:
```
ci_np <- read_excel(here("data", "ci_np.xlsx"))
```
*Note: If you want to explicitly read in an .xlsx or .xls file, you can use `read_xlsx()` or `read_xls()` instead. `read_excel()` will make its best guess about which type of Excel file you’re reading in, so is the generic form.*
Explore the ci\_np data frame as above using functions like `View()`, `names()`, `head()`, and `summary()`.
Now that we have read in the National Parks visitation data, let’s use `ggplot2` to visualize it.
### 5\.3\.1 `read_csv()` to read in comma\-separated\-value (.csv) files
There are many types of files containing data that you might want to work with in R. A common one is a comma separated value (CSV) file, which contains values with each column entry separated by a comma delimiter. CSVs can be opened, viewed, and worked with in Excel just like an .xls or .xlsx file \- but let’s learn how to get data directly from a CSV into R where we can work with it more reproducibly.
To read in the ca\_np.csv file, we need to:
* insert a new code chunk
* use `read_csv()` to read in the file
* use `here()` within `read_csv()` to tell it where to look
* assign the stored data an object name (we’ll store ours as ca\_np)
```
ca_np <- read_csv(here("data", "ca_np.csv"))
```
Look in your Environment to see that **ca\_np** now shows up. Click on the object in the Environment, and R will automatically run the `View()` function for you to pull up your data in a separate viewing tab. Now we can look at it in the spreadsheet format we’re used to.
We can explore our data frame a bit more to see what it contains. For example:
* `names()`: to see the variable (column) names
* `head()`: to see the first *x* rows (6 is the default)
* `summary()`: see a quick summary of each variable
(Remember that `names()` is the name of the function but `names(ca_np)` is how we use it on the data.)
Cool! Next, let’s read in ci\_np.xlsx an Excel file) using `read_excel()`.
### 5\.3\.2 `readxl` to read in Excel files
We also have an Excel file (ci\_np.xlsx) that contains observations **only** for Channel Islands National Park visitation. Both `readr` and `readxl` are part of the `tidyverse`, which means we should expect their functions to have similar syntax and structure.
*Note: If `readxl` is part of the `tidyverse`, then why did I have to attach it separately?* Great question! Too keep the `tidyverse` manageable, there are “core” packages (`readr`, `dplyr`, `tidyr`, `ggplot2`, `forcats`, `purrr`, `tibble`, `stringr`) that you would expect to use frequently, and those are automatically attached when you use `library(tidyverse)`. But there are *also* more specialized `tidyverse` packages (e.g. `readxl`, `reprex`, `lubridate`, `rvest`) that are built with similar design philosophy, but are not automatically attached with `library(tidyverse)`. Those specialized packages are **installed** along with the `tidyverse`, but need to be attached individually (e.g. with `library(readxl)`).
Use `read_excel()` to get the ci\_np.xlsx data into R:
```
ci_np <- read_excel(here("data", "ci_np.xlsx"))
```
*Note: If you want to explicitly read in an .xlsx or .xls file, you can use `read_xlsx()` or `read_xls()` instead. `read_excel()` will make its best guess about which type of Excel file you’re reading in, so is the generic form.*
Explore the ci\_np data frame as above using functions like `View()`, `names()`, `head()`, and `summary()`.
Now that we have read in the National Parks visitation data, let’s use `ggplot2` to visualize it.
5\.4 Our first ggplot graph: Visitors to Channel Islands NP
-----------------------------------------------------------
To create a bare\-bones ggplot graph, we need to tell R three basic things:
1. We’re using `ggplot2::ggplot()`
2. Data we’re using \& variables we’re plotting (i.e., what is x and/or y?)
3. What type of graph we’re making (the type of *geom*)
Generally, that structure will look like this:
```
ggplot(data = df_name, aes(x = x_var_name, y = y_var_name)) +
geom_type()
```
Breaking that down:
* First, tell R you’re using `ggplot()`
* Then, tell it the object name where variables exist (`data = df_name`)
* Next, tell it the aesthetics `aes()` to specify which variables you want to plot
* Then add a layer for the type of geom (graph type) with `geom_*()` \- for example, `geom_point()` is a scatterplot, `geom_line()` is a line graph, `geom_col()` is a column graph, etc.
Let’s do that to create a line graph of visitors to Channel Islands National Park:
```
ggplot(data = ci_np, aes(x = year, y = visitors)) +
geom_line()
```
We’re going to be doing a lot of plot variations with those same variables. Let’s store the first line as object `gg_base` so that we don’t need to retype it each time:
```
gg_base <- ggplot(data = ci_np, aes(x = year, y = visitors))
```
Or, we could change that to a scatterplot just by updating the `geom_*`:
```
gg_base +
geom_point()
```
We could even do that for a column graph:
```
gg_base +
geom_col()
```
Or an area plot…
```
gg_base +
geom_area()
```
We can see that updating to different `geom_*` types is quick, so long as the types of graphs we’re switching between are compatible.
The data are there, now let’s do some data viz customization.
5\.5 Intro to customizing `ggplot` graphs
-----------------------------------------
First, we’ll customize some aesthetics (e.g. colors, styles, axis labels, etc.) of our graphs based on non\-variable values.
> We can change the aesthetics of elements in a ggplot graph by adding arguments within the layer where that element is created.
Some common arguments we’ll use first are:
* `color =` or `colour =`: update point or line colors
* `fill =`: update fill color for objects with areas
* `linetype =`: update the line type (dashed, long dash, etc.)
* `pch =`: update the point style
* `size =`: update the element size (e.g. of points or line thickness)
* `alpha =`: update element opacity (1 \= opaque, 0 \= transparent)
Building on our first line graph, let’s update the line color to “purple” and make the line type “dashed”:
```
gg_base +
geom_line(
color = "purple",
linetype = "dashed"
)
```
How do we know which color names ggplot will recognize? If you google “R colors ggplot2” you’ll find a lot of good resources. Here’s one: [SAPE ggplot2 colors quick reference guide](http://sape.inf.usi.ch/quick-reference/ggplot2/colour)
Now let’s update the point, style and size of points on our previous scatterplot graph using `color =`, `size =`, and `pch =` (see `?pch` for the different point styles, which can be further customized).
```
gg_base +
geom_point(color = "purple",
pch = 17,
size = 4,
alpha = 0.5)
```
### 5\.5\.1 Activity: customize your own ggplot graph
Update one of the example graphs you created above to customize **at least** an element color and size!
### 5\.5\.1 Activity: customize your own ggplot graph
Update one of the example graphs you created above to customize **at least** an element color and size!
5\.6 Mapping variables onto aesthetics
--------------------------------------
In the examples above, we have customized aesthetics based on constants that we input as arguments (e.g., the color / style / size isn’t changing based on a variable characteristic or value). Sometimes, however, we **do** want the aesthetics of a graph to depend on a variable. To do that, we’ll **map variables onto graph aesthetics**, meaning we’ll change how an element on the graph looks based on a variable characteristic (usually, character or value).
> When we want to customize a graph element based on a variable’s characteristic or value, add the argument within `aes()` in the appropriate `geom_*()` layer
In short, if updating aesthetics based on a variable, make sure to put that argument inside of `aes()`.
**Example:** Create a ggplot scatterplot graph where the **size** and **color** of the points change based on the **number of visitors**, and make all points the same level of opacity (`alpha = 0.5`). Notice the `aes()` around the `size =` and `color =` arguments.
Also: this is overmapped and unnecessary. Avoid excessive / overcomplicated aesthetic mapping in data visualization.
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
)
```
In the example above, notice that the two arguments that **do** depend on variables are within `aes()`, but since `alpha = 0.5` doesn’t depend on a variable then it is *outside the `aes()` but still within the `geom_point()` layer*.
### 5\.6\.1 Activity: map variables onto graph aesthetics
Create a column plot of Channel Islands National Park visitation over time, where the **fill color** (argument: `fill =`) changes based on the number of **visitors**.
```
gg_base +
geom_col(aes(fill = visitors))
```
**Sync your project with your GitHub repo.**
5\.7 ggplot2 complete themes
----------------------------
While every element of a ggplot graph is manually customizable, there are also built\-in themes (`theme_*()`) that you can add to your ggplot code to make some major headway before making smaller tweaks manually.
Here are a few to try today (but also notice all the options that appear as we start typing `theme_` into our ggplot graph code!):
* `theme_light()`
* `theme_minimal()`
* `theme_bw()`
Here, let’s update our previous graph with `theme_minimal()`:
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
) +
theme_minimal()
```
5\.8 Updating axis labels and titles
------------------------------------
Use `labs()` to update axis labels, and add a title and/or subtitle to your ggplot graph.
```
gg_base +
geom_line(linetype = "dotted") +
theme_bw() +
labs(
x = "Year",
y = "Annual park visitors",
title = "Channel Islands NP Visitation",
subtitle = "(1963 - 2016)"
)
```
**Note**: If you want to update the formatting of axis values (for example, to convert to comma format instead of scientific format above), you can use the `scales` package options (see more from the [R Cookbook](http://www.cookbook-r.com/Graphs/Axes_(ggplot2)/)).
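For example, here is a minimal sketch (assuming the `scales` package is installed, which it is as a `ggplot2` dependency) that switches the y\-axis labels to comma format:
```
gg_base +
  geom_line() +
  scale_y_continuous(labels = scales::comma)
```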
5\.9 Combining compatible geoms
-------------------------------
As long as the geoms are compatible, we can layer them on top of one another to further customize a graph.
For example, adding points to a line graph:
```
gg_base +
geom_line(color = "purple") +
geom_point(color = "orange",
aes(size = year),
alpha = 0.5)
```
Or, combine a column and line graph (not sure why you’d want to do this, but you can):
```
gg_base +
geom_col(fill = "orange",
color = "purple") +
geom_line(color = "green")
```
5\.10 Multi\-series ggplot graphs
---------------------------------
In the examples above, we only had a single series \- visitation at Channel Islands National Park. Often we’ll want to visualize multiple series. For example, from the `ca_np` object we have stored, we might want to plot visitation for *all* California National Parks.
To do that, we need to add an aesthetic that lets `ggplot` know how things are going to be grouped. A demonstration of why that’s important \- what happens if we *don’t* let ggplot know how to group things?
```
ggplot(data = ca_np, aes(x = year, y = visitors)) +
geom_line()
```
Well that’s definitely a mess, and it’s because ggplot has no idea that these **should be different series based on the different parks that appear in the ‘park\_name’ column**.
We can make sure R does know by adding an explicit grouping argument (`group =`), or by updating an aesthetic based on *park\_name*:
```
ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name)) +
geom_line()
```
**Note**: You could also add an aesthetic (`color = park_name`) in the `geom_line()` layer to create groupings, instead of in the topmost `ggplot()` layer.
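For example, this sketch drops the `group =` argument and instead maps *park\_name* to line color within `geom_line()`; ggplot infers the grouping from that aesthetic and adds a legend:
```
ggplot(data = ca_np, aes(x = year, y = visitors)) +
  geom_line(aes(color = park_name))
```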
Let’s store that topmost line so that we can use it more quickly later on in the lesson:
```
gg_np <- ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name))
```
5\.11 Faceting ggplot graphs
----------------------------
When we facet graphs, we split them up into multiple plotting panels, where each panel contains a subset of the data. In our case, we’ll split the graph above into different panels, each containing visitation data for a single park.
Also notice that any general theme changes made will be applied to *all* of the graphs.
```
gg_np +
geom_line(show.legend = FALSE) +
theme_light() +
labs(x = "year", y = "annual visitors") +
facet_wrap(~ park_name)
```
5\.12 Exporting a ggplot graph with `ggsave()`
----------------------------------------------
If we want our graph to appear in a knitted html, then we don’t need to do anything else. But often we’ll need a saved image file, of specific size and resolution, to share or for publication.
`ggsave()` will export the *most recently run* ggplot graph by default (`plot = last_plot()`), unless you give it the name of a different saved ggplot object. Some common arguments for `ggsave()`:
* `width =`: set exported image width (default inches)
* `height =`: set exported image height (default inches)
* `dpi =`: set dpi (dots per inch)
So to export the faceted graph above at 180 dpi, with a width of 8" and a height of 7", we can use:
```
ggsave(here("figures", "np_graph.jpg"), dpi = 180, width = 8, height = 7)
```
Notice that a .jpg image of that name and size is now stored in the `figures` folder within your working directory. You can change the type of exported image, too (e.g. pdf, tiff, eps, png, bmp, svg).
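For example, changing the file extension in `ggsave()` is enough to switch formats; this sketch (assuming the same `figures` folder) exports the same graph as a .png instead:
```
ggsave(here("figures", "np_graph.png"), dpi = 180, width = 8, height = 7)
```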
**Sync your project with your GitHub repo.**
* Stage
* Commit
* Pull (to check for remote changes)
* Push!
### 5\.12\.1 End `ggplot` session!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/ggplot2.html |
Chapter 5 Graphs with ggplot2
=============================
5\.1 Summary
------------
In this session, we’ll first learn how to read some external data (from .xls, .xlsx, and CSV files) into R with the `readr` and `readxl` packages (both part of the `tidyverse`).
Then, we’ll write reproducible code to build graphs piece\-by\-piece. In Excel, graphs are made by manually selecting options \- which, as we’ve discussed previously, may not be the best option for reproducibility. Also, if we haven’t built a graph with reproducible code, then we might not be able to easily recreate a graph *or* use that code again to make the same style graph with different data.
Using `ggplot2`, the graphics package within the `tidyverse`, we’ll write reproducible code to manually and thoughtfully build our graphs.
> “ggplot2 implements the grammar of graphics, a coherent system for describing and building graphs. With ggplot2, you can do more faster by learning one system and applying it in many places.” \- [R4DS](http://r4ds.had.co.nz/data-visualisation.html)
So yeah…that `gg` is from “grammar of graphics.”
We’ll use the `ggplot2` package, but the function we use to initialize a graph will be `ggplot`, which works best for data in tidy format (i.e., a column for every variable, and a row for every observation). Graphics with `ggplot` are built step\-by\-step, adding new elements as layers with a plus sign (`+`) between layers (note: this is different from the pipe operator, `%>%`). Adding layers in this fashion allows for extensive flexibility and customization of plots.
### 5\.1\.1 Objectives
* Read in external data (Excel files, CSVs) with `readr` and `readxl`
* Initial data exploration
* Build several common types of graphs (scatterplot, column, line) in ggplot2
* Customize gg\-graph aesthetics (color, style, themes, etc.)
* Update axis labels and titles
* Combine compatible graph types (geoms)
* Build multiseries graphs
* Split up data into faceted graphs
* Export figures with `ggsave()`
### 5\.1\.2 Resources
* [readr documentation](https://readr.tidyverse.org/) from tidyverse.org
* [readxl documentation](https://readxl.tidyverse.org/) from tidyverse.org
* [readxl workflows](https://readxl.tidyverse.org/articles/articles/readxl-workflows.html) from tidyverse.org
* [Chapter 3 \- Data Visualization in R for Data Science by Grolemund and Wickham](https://r4ds.had.co.nz/data-visualisation.html)
* [ggplot2\-cheatsheet\-2\.1\.pdf](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf)
* [Graphs with ggplot2 \- Cookbook for R](http://www.cookbook-r.com/Graphs/#graphs-with-ggplot2)
* [“Why I use ggplot2” \- David Robinson Blog Post](http://varianceexplained.org/r/why-I-use-ggplot2/)
5\.2 Getting started \- In existing .Rmd, attach packages
---------------------------------------------------------
In your existing `plots-ggplot.Rmd` from Session 2, remove everything below the first code chunk.
The `ggplot2` package is part of the `tidyverse`, so we don’t need to attach it separately. Attach the `tidyverse`, `readxl` and `here` packages in the top\-most code chunk of your .Rmd.
```
library(tidyverse)
library(readxl)
library(here)
```
5\.3 Read in .xlsx and .csv files
---------------------------------
In this session, we’ll use data for parks visitation from two files:
* A comma\-separated\-value (CSV) file containing visitation data for all National Parks in California (ca\_np.csv)
* A single Excel worksheet containing only visitation for Channel Islands National Park (ci\_np.xlsx)
### 5\.3\.1 `read_csv()` to read in comma\-separated\-value (.csv) files
There are many types of files containing data that you might want to work with in R. A common one is a comma separated value (CSV) file, which contains values with each column entry separated by a comma delimiter. CSVs can be opened, viewed, and worked with in Excel just like an .xls or .xlsx file \- but let’s learn how to get data directly from a CSV into R where we can work with it more reproducibly.
To read in the ca\_np.csv file, we need to:
* insert a new code chunk
* use `read_csv()` to read in the file
* use `here()` within `read_csv()` to tell it where to look
* assign the stored data an object name (we’ll store ours as ca\_np)
```
ca_np <- read_csv(here("data", "ca_np.csv"))
```
Look in your Environment to see that **ca\_np** now shows up. Click on the object in the Environment, and R will automatically run the `View()` function for you to pull up your data in a separate viewing tab. Now we can look at it in the spreadsheet format we’re used to.
We can explore our data frame a bit more to see what it contains. For example:
* `names()`: to see the variable (column) names
* `head()`: to see the first *x* rows (6 is the default)
* `summary()`: see a quick summary of each variable
(Remember that `names()` is the name of the function but `names(ca_np)` is how we use it on the data.)
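For example, running each of those on the **ca\_np** data frame we just read in:
```
names(ca_np)
head(ca_np)
summary(ca_np)
```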
Cool! Next, let’s read in ci\_np.xlsx (an Excel file) using `read_excel()`.
### 5\.3\.2 `readxl` to read in Excel files
We also have an Excel file (ci\_np.xlsx) that contains observations **only** for Channel Islands National Park visitation. Both `readr` and `readxl` are part of the `tidyverse`, which means we should expect their functions to have similar syntax and structure.
*Note: If `readxl` is part of the `tidyverse`, then why did I have to attach it separately?* Great question! To keep the `tidyverse` manageable, there are “core” packages (`readr`, `dplyr`, `tidyr`, `ggplot2`, `forcats`, `purrr`, `tibble`, `stringr`) that you would expect to use frequently, and those are automatically attached when you use `library(tidyverse)`. But there are *also* more specialized `tidyverse` packages (e.g. `readxl`, `reprex`, `lubridate`, `rvest`) that are built with similar design philosophy, but are not automatically attached with `library(tidyverse)`. Those specialized packages are **installed** along with the `tidyverse`, but need to be attached individually (e.g. with `library(readxl)`).
Use `read_excel()` to get the ci\_np.xlsx data into R:
```
ci_np <- read_excel(here("data", "ci_np.xlsx"))
```
*Note: If you want to explicitly read in an .xlsx or .xls file, you can use `read_xlsx()` or `read_xls()` instead. `read_excel()` will make its best guess about which type of Excel file you’re reading in, so is the generic form.*
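For example, since we know ci\_np.xlsx is an .xlsx file, this sketch with `read_xlsx()` would be equivalent to the `read_excel()` call above:
```
ci_np <- read_xlsx(here("data", "ci_np.xlsx"))
```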
Explore the ci\_np data frame as above using functions like `View()`, `names()`, `head()`, and `summary()`.
Now that we have read in the National Parks visitation data, let’s use `ggplot2` to visualize it.
5\.4 Our first ggplot graph: Visitors to Channel Islands NP
-----------------------------------------------------------
To create a bare\-bones ggplot graph, we need to tell R three basic things:
1. We’re using `ggplot2::ggplot()`
2. Data we’re using \& variables we’re plotting (i.e., what is x and/or y?)
3. What type of graph we’re making (the type of *geom*)
Generally, that structure will look like this:
```
ggplot(data = df_name, aes(x = x_var_name, y = y_var_name)) +
geom_type()
```
Breaking that down:
* First, tell R you’re using `ggplot()`
* Then, tell it the object name where variables exist (`data = df_name`)
* Next, tell it the aesthetics `aes()` to specify which variables you want to plot
* Then add a layer for the type of geom (graph type) with `geom_*()` \- for example, `geom_point()` is a scatterplot, `geom_line()` is a line graph, `geom_col()` is a column graph, etc.
Let’s do that to create a line graph of visitors to Channel Islands National Park:
```
ggplot(data = ci_np, aes(x = year, y = visitors)) +
geom_line()
```
We’re going to be doing a lot of plot variations with those same variables. Let’s store the first line as object `gg_base` so that we don’t need to retype it each time:
```
gg_base <- ggplot(data = ci_np, aes(x = year, y = visitors))
```
Or, we could change that to a scatterplot just by updating the `geom_*`:
```
gg_base +
geom_point()
```
We could even do that for a column graph:
```
gg_base +
geom_col()
```
Or an area plot…
```
gg_base +
geom_area()
```
We can see that updating to different `geom_*` types is quick, so long as the types of graphs we’re switching between are compatible.
The data are there, now let’s do some data viz customization.
5\.5 Intro to customizing `ggplot` graphs
-----------------------------------------
First, we’ll customize some aesthetics (e.g. colors, styles, axis labels, etc.) of our graphs based on non\-variable values.
> We can change the aesthetics of elements in a ggplot graph by adding arguments within the layer where that element is created.
Some common arguments we’ll use first are:
* `color =` or `colour =`: update point or line colors
* `fill =`: update fill color for objects with areas
* `linetype =`: update the line type (dashed, long dash, etc.)
* `pch =`: update the point style
* `size =`: update the element size (e.g. of points or line thickness)
* `alpha =`: update element opacity (1 \= opaque, 0 \= transparent)
Building on our first line graph, let’s update the line color to “purple” and make the line type “dashed”:
```
gg_base +
geom_line(
color = "purple",
linetype = "dashed"
)
```
How do we know which color names ggplot will recognize? If you google “R colors ggplot2” you’ll find a lot of good resources. Here’s one: [SAPE ggplot2 colors quick reference guide](http://sape.inf.usi.ch/quick-reference/ggplot2/colour)
Now let’s update the point color, style, and size on our previous scatterplot graph using `color =`, `size =`, and `pch =` (see `?pch` for the different point styles, which can be further customized).
```
gg_base +
geom_point(color = "purple",
pch = 17,
size = 4,
alpha = 0.5)
```
### 5\.5\.1 Activity: customize your own ggplot graph
Update one of the example graphs you created above to customize **at least** an element color and size!
5\.6 Mapping variables onto aesthetics
--------------------------------------
In the examples above, we have customized aesthetics based on constants that we input as arguments (e.g., the color / style / size isn’t changing based on a variable characteristic or value). Sometimes, however, we **do** want the aesthetics of a graph to depend on a variable. To do that, we’ll **map variables onto graph aesthetics**, meaning we’ll change how an element on the graph looks based on a variable characteristic (usually, character or value).
> When we want to customize a graph element based on a variable’s characteristic or value, add the argument within `aes()` in the appropriate `geom_*()` layer
In short, if updating aesthetics based on a variable, make sure to put that argument inside of `aes()`.
**Example:** Create a ggplot scatterplot graph where the **size** and **color** of the points change based on the **number of visitors**, and make all points the same level of opacity (`alpha = 0.5`). Notice the `aes()` around the `size =` and `color =` arguments.
Also: this is overmapped and unnecessary. Avoid excessive / overcomplicated aesthetic mapping in data visualization.
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
)
```
In the example above, notice that the two arguments that **do** depend on variables are within `aes()`, but since `alpha = 0.5` doesn’t depend on a variable then it is *outside the `aes()` but still within the `geom_point()` layer*.
### 5\.6\.1 Activity: map variables onto graph aesthetics
Create a column plot of Channel Islands National Park visitation over time, where the **fill color** (argument: `fill =`) changes based on the number of **visitors**.
```
gg_base +
geom_col(aes(fill = visitors))
```
**Sync your project with your GitHub repo.**
5\.7 ggplot2 complete themes
----------------------------
While every element of a ggplot graph is manually customizable, there are also built\-in themes (`theme_*()`) that you can add to your ggplot code to make some major headway before making smaller tweaks manually.
Here are a few to try today (but also notice all the options that appear as we start typing `theme_` into our ggplot graph code!):
* `theme_light()`
* `theme_minimal()`
* `theme_bw()`
Here, let’s update our previous graph with `theme_minimal()`:
```
gg_base +
geom_point(
aes(size = visitors,
color = visitors),
alpha = 0.5
) +
theme_minimal()
```
5\.8 Updating axis labels and titles
------------------------------------
Use `labs()` to update axis labels, and add a title and/or subtitle to your ggplot graph.
```
gg_base +
geom_line(linetype = "dotted") +
theme_bw() +
labs(
x = "Year",
y = "Annual park visitors",
title = "Channel Islands NP Visitation",
subtitle = "(1963 - 2016)"
)
```
**Note**: If you want to update the formatting of axis values (for example, to convert to comma format instead of scientific format above), you can use the `scales` package options (see more from the [R Cookbook](http://www.cookbook-r.com/Graphs/Axes_(ggplot2)/)).
5\.9 Combining compatible geoms
-------------------------------
As long as the geoms are compatible, we can layer them on top of one another to further customize a graph.
For example, adding points to a line graph:
```
gg_base +
geom_line(color = "purple") +
geom_point(color = "orange",
aes(size = year),
alpha = 0.5)
```
Or, combine a column and line graph (not sure why you’d want to do this, but you can):
```
gg_base +
geom_col(fill = "orange",
color = "purple") +
geom_line(color = "green")
```
5\.10 Multi\-series ggplot graphs
---------------------------------
In the examples above, we only had a single series \- visitation at Channel Islands National Park. Often we’ll want to visualize multiple series. For example, from the `ca_np` object we have stored, we might want to plot visitation for *all* California National Parks.
To do that, we need to add an aesthetic that lets `ggplot` know how things are going to be grouped. A demonstration of why that’s important \- what happens if we *don’t* let ggplot know how to group things?
```
ggplot(data = ca_np, aes(x = year, y = visitors)) +
geom_line()
```
Well that’s definitely a mess, and it’s because ggplot has no idea that these **should be different series based on the different parks that appear in the ‘park\_name’ column**.
We can make sure R does know by adding an explicit grouping argument (`group =`), or by updating an aesthetic based on *park\_name*:
```
ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name)) +
geom_line()
```
**Note**: You could also add an aesthetic (`color = park_name`) in the `geom_line()` layer to create groupings, instead of in the topmost `ggplot()` layer.
Let’s store that topmost line so that we can use it more quickly later on in the lesson:
```
gg_np <- ggplot(data = ca_np, aes(x = year, y = visitors, group = park_name))
```
5\.11 Faceting ggplot graphs
----------------------------
When we facet graphs, we split them up into multiple plotting panels, where each panel contains a subset of the data. In our case, we’ll split the graph above into different panels, each containing visitation data for a single park.
Also notice that any general theme changes made will be applied to *all* of the graphs.
```
gg_np +
geom_line(show.legend = FALSE) +
theme_light() +
labs(x = "year", y = "annual visitors") +
facet_wrap(~ park_name)
```
5\.12 Exporting a ggplot graph with `ggsave()`
----------------------------------------------
If we want our graph to appear in a knitted html, then we don’t need to do anything else. But often we’ll need a saved image file, of specific size and resolution, to share or for publication.
`ggsave()` will export the *most recently run* ggplot graph by default (`plot = last_plot()`), unless you give it the name of a different saved ggplot object. Some common arguments for `ggsave()`:
* `width =`: set exported image width (default inches)
* `height =`: set exported image height (default inches)
* `dpi =`: set dpi (dots per inch)
So to export the faceted graph above at 180 dpi, with a width of 8" and a height of 7", we can use:
```
ggsave(here("figures", "np_graph.jpg"), dpi = 180, width = 8, height = 7)
```
Notice that a .jpg image of that name and size is now stored in the `figures` folder within your working directory. You can change the type of exported image, too (e.g. pdf, tiff, eps, png, bmp, svg).
**Sync your project with your GitHub repo.**
* Stage
* Commit
* Pull (to check for remote changes)
* Push!
### 5\.12\.1 End `ggplot` session!
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/pivot-tables.html |
Chapter 6 Pivot Tables with `dplyr`
===================================
6\.1 Summary
------------
Pivot tables are powerful tools in Excel for summarizing data in different ways. We will create these tables using the `group_by` and `summarize` functions from the `dplyr` package (part of the Tidyverse). We will also learn how to format tables and practice creating a reproducible report using RMarkdown and sharing it with GitHub.
**Data used in the synthesis section:**
* File name: lobsters.xlsx and lobsters2\.xlsx
* Description: Lobster size, abundance and fishing pressure (Santa Barbara coast)
* Link: [https://portal.edirepository.org/nis/mapbrowse?scope\=knb\-lter\-sbc\&identifier\=77\&revision\=newest](https://portal.edirepository.org/nis/mapbrowse?scope=knb-lter-sbc&identifier=77&revision=newest)
* Citation: Reed D. 2019\. SBC LTER: Reef: Abundance, size and fishing effort for California Spiny Lobster (Panulirus interruptus), ongoing since 2012\. Environmental Data Initiative. [doi](https://doi.org/10.6073/pasta/a593a675d644fdefb736750b291579a0).
### 6\.1\.1 Objectives
In R, we can use the `dplyr` package for pivot tables by using two functions, `group_by()` and `summarize()`, together with the pipe operator `%>%`. We will also continue to emphasize reproducibility in all our analyses.
* Discuss pivot tables in Excel
* Introduce `group_by() %>% summarize()` from the `dplyr` package
* Learn `mutate()` and `select()` to work column\-wise
* Practice our reproducible workflow with RMarkdown and GitHub
### 6\.1\.2 Resources
* [`dplyr` website: dplyr.tidyverse.org](https://dplyr.tidyverse.org/)
* [R for Data Science: Transform Chapter](https://r4ds.had.co.nz/transform.html) by Hadley Wickham \& Garrett Grolemund
* [Intro to Pivot Tables I\-III videos](https://youtu.be/g530cnFfk8Y) by Excel Campus
* [Data organization in spreadsheets](https://peerj.com/preprints/3183/) by Karl Broman \& Kara Woo
6\.2 Overview \& setup
----------------------
[Wikipedia describes a pivot table](https://en.wikipedia.org/wiki/Pivot_table) as a “table of statistics that summarizes the data of a more extensive table…this summary might include sums, averages, or other statistics, which the pivot table groups together in a meaningful way.”
> **Aside:** Wikipedia also says that “Although pivot table is a generic term, Microsoft trademarked PivotTable in the United States in 1994\.”
Pivot tables are a really powerful tool for summarizing data, and we can have similar functionality in R — as well as nicely automating and reporting these tables.
We will first have a look at our data, demo using pivot tables in Excel, and then create reproducible tables in R.
### 6\.2\.1 View data in Excel
When reading in Excel files (or really any data that isn’t yours), it can be a good idea to open the data and look at it so you know what you’re up against.
Let’s open the lobsters.xlsx data in Excel.
It’s one sheet, and it’s rectangular. In this data set, every row is a unique observation. This is called “uncounted” data; you’ll see there is no row for how many lobsters were seen because each row is an observation, or an “n of 1\.”
But also notice that the data doesn’t start until line 5; there are 4 lines of metadata — data about the data that is super important! — that we don’t want to muddy our analyses.
Now your first idea might be to delete these 4 rows from this Excel sheet and save them on another, but we also know that we need to keep the raw data raw. So let’s not touch this data in Excel, we’ll remove these lines in R. Let’s do that first so then we’ll be all set.
### 6\.2\.2 RMarkdown setup
Let’s start a new RMarkdown file in our repo, at the top\-level (where it will be created by default in our Project). I’ll call mine `pivot_lobsters.Rmd`.
In the setup chunk, let’s attach our libraries and read in our lobster data. In addition to the `tidyverse` package we will also use the `skimr` package. You will have to install it, but you don’t want it to be reinstalled every time you run your code. The following is a nice convention for keeping the install instructions available on the same line as the `library()` call.
```
## attach libraries
library(tidyverse)
library(readxl)
library(here)
library(skimr) # install.packages('skimr')
library(kableExtra) # install.packages('kableExtra')
```
We used `read_excel()` before, which is the generic function that reads both .xls and .xlsx files. Since we know that this is a .xlsx file, we will demo using the `read_xlsx()` function.
We can expect that someone in the history of R and especially the history of the `readxl` package has needed to skip lines at the top of an Excel file before. So let’s look at the help pages `?read_xlsx`: there is an argument called `skip` that we can set to 4 to skip 4 lines.
```
## read in data
lobsters <- read_xlsx(here("data/lobsters.xlsx"), skip=4)
```
Great. We’ve seen this data in Excel so I don’t feel the need to use `head()` here like we’ve done before, but I do like having a look at summary statistics and classes.
#### 6\.2\.2\.1 `skimr::skim`
To look at summary statistics we’ve used `summary()`, which is good for numeric columns, but it doesn’t give a lot of useful information for non\-numeric data. For example, it wouldn’t tell us how many unique sites there are in this dataset. For that, I like using the `skimr` package:
```
# explore data
skimr::skim(lobsters)
```
This `skimr::` notation is a reminder to me that `skim` is from the `skimr` package. It is a nice convention: it’s a reminder to others (especially you!).
`skim` lets us look more at each variable. Here we can look at our character variables and see that there are 5 unique sites (in the `n_unique` output). Also, I particularly like looking at missing data. There are 6 missing values in the `size_mm` variable.
### 6\.2\.3 Our task
So now we have an idea of our data. But now we have a task: we’ve been asked by a colleague to report about how the average size of lobsters has changed for each site across time.
We will complete this task with R by using the `dplyr` package for data wrangling, which we will do after demoing how this would do it with pivot tables in Excel.
6\.3 Pivot table demo
---------------------
I will demo how we will make a pivot table with our lobster data. You are welcome to sit back and watch rather than following along.
First let’s summarize how many lobsters were counted each year. This means I want a count of rows by year.
So to do this in Excel we would initiate the Pivot Table Process:
Excel will ask what data I would like to include, and it will do its best to suggest coordinates for my data within the spreadsheet (it can have difficulty with non\-rectangular or “non\-tidy” data). It does a good job here of ignoring those top lines of data description.
It will also suggest we make our PivotTable in a new worksheet.
And then we’ll see our new sheet and a little wizard to help us create the PivotTable.
### 6\.3\.1 pivot one variable
I want to start by summarizing by year, so I first drag the `year` variable down into the “Rows” box. What I see at this point are the years listed: this confirms that I’m going to group by years.
And then, to summarize the counts for each year, I actually drag the same `year` variable into the “Values” box. And it will create a Pivot Table for me! But it uses “sum” as the default summary statistic; this doesn’t make a whole lot of sense for summarizing years. I can click the little “I” icon to change this summary statistic to what I want: Count of year.
A few things to note:
* The pivot table is a separate entity from our data (it’s on a different sheet); the original data has not been affected. This “keeps the raw data raw,” which is great practice.
* The pivot table summarizes on the variables you request, meaning that we don’t see other columns (like date, month, or site).
* Excel also calculates the Grand total for all sites (in bold). This is nice for communicating about data. But it can be problematic in the future, because it might not be clear that this is a calculation and not data. It could be easy to take a total of this column and introduce errors by doubling the total count.
So pivot tables are great because they summarize the data and keep the raw data raw — they even promote good practice because they by default ask you if you’d like to present the data in a new sheet rather than in the same sheet.
### 6\.3\.2 pivot two variables
We can include multiple variables in our PivotTable. If we want to add site as a second variable, we can drag it down:
But this is comparing sites within a year; we want to compare years within a site. We can reverse the order easily enough by dragging (you just have to remember to do all of these steps the next time you’d want to repeat this):
So in terms of our full task, which is to compare the average lobster size by site and year, we are on our way! I’ll leave this as a cliff\-hanger here in Excel and we will carry forward in R.
Just to recap what we did here: we told Excel we wanted to group by something (here: `year` and `site`) and then summarize by something (here: count, not sum!)
6\.4 `group_by()` %\>% `summarize()`
------------------------------------
In R, we can create the functionality of pivot tables with the same logic: we will tell R to group by something and then summarize by something. Visually, it looks like this:
This graphic is from [RStudio’s old\-school data wrangling cheatsheet](http://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf) (all cheatsheets are available from <https://rstudio.com/resources/cheatsheets>). It’s incredibly powerful to visualize what we are talking about with our data when we do these kinds of operations.
And in code, it looks like this:
```
data %>%
group_by() %>%
summarize()
```
It reads: “Take the data and then group by something and then summarize by something.”
The pipe operator `%>%` is a really critical feature of the `dplyr` package, originally created for the `magrittr` package. It lets us chain together steps of our data wrangling, enabling us to tell a clear story about our entire data analysis. This is not only a written story to archive what we’ve done, but it will be a reproducible story that can be rerun and remixed. It is not difficult to read as a human, and it is not a series of clicks to remember.
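To make that concrete, here is a small sketch (using the `lobsters` data and the `n()` counting function we introduce below) comparing a piped chain with the equivalent nested call it replaces:
```
## piped: reads left-to-right, one step per line
lobsters %>%
  group_by(year) %>%
  summarize(count = n())

## nested: the same operations, but read inside-out
summarize(group_by(lobsters, year), count = n())
```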
Let’s try it out!
### 6\.4\.1 `group_by` one variable
Let’s use `group_by() %>% summarize()` with our `lobsters` data, just like we did in Excel. We will first group\_by year and then summarize by count, using the function `n()` (in the `dplyr` package). `n()` counts the number of times an observation shows up, and since this is uncounted data, this will count each row.
We can say this out loud while we write it: “take the lobsters data and then group\_by year and then summarize by count in a new column we’ll call `count_by_year`.”
```
lobsters %>%
group_by(year) %>%
summarize(count_by_year = n())
```
Notice how together, `group_by` and `summarize` minimize the amount of information we see. We also saw this with the pivot table. We lose the other columns that aren’t involved here.
Question: What if you *don’t* group\_by first? Let’s try it and discuss what’s going on.
```
lobsters %>%
summarize(count = n())
```
```
## # A tibble: 1 x 1
## count
## <int>
## 1 2893
```
So if we don’t `group_by` first, we will get a single summary statistic (a count of all rows, in this case) for the whole dataset.
Another question: what if we *only* group\_by?
```
lobsters %>%
group_by(year)
```
```
## # A tibble: 2,893 x 7
## # Groups: year [5]
## year month date site transect replicate size_mm
## <dbl> <dbl> <chr> <chr> <dbl> <chr> <dbl>
## 1 2012 8 8/20/12 ivee 3 A 70
## 2 2012 8 8/20/12 ivee 3 B 60
## 3 2012 8 8/20/12 ivee 3 B 65
## 4 2012 8 8/20/12 ivee 3 B 70
## 5 2012 8 8/20/12 ivee 3 B 85
## 6 2012 8 8/20/12 ivee 3 C 60
## 7 2012 8 8/20/12 ivee 3 C 65
## 8 2012 8 8/20/12 ivee 3 C 67
## 9 2012 8 8/20/12 ivee 3 D 70
## 10 2012 8 8/20/12 ivee 4 B 85
## # … with 2,883 more rows
```
R doesn’t summarize our data, but you can see from the output that it is indeed grouped. However, we haven’t done anything to the original data: we are only exploring. We are keeping the raw data raw.
To convince ourselves, let’s now check the `lobsters` variable. We can do this by clicking on `lobsters` in the Environment pane in RStudio.
We see that we haven’t changed any of our original data that was stored in this variable. (Just like how the pivot table didn’t affect the raw data on the original sheet).
> ***Aside***: You’ll also see that when you click on the variable name in the Environment pane, `View(lobsters)` shows up in your Console. `View()` (capital V) is the R function to view any variable in the viewer. So this is something that you can write in your RMarkdown script, although RMarkdown will not be able to knit this view feature into the formatted document. So, if you want include `View()` in your RMarkdown document you will need to either comment it out `#View()` or add `eval=FALSE` to the top of the code chunk so that the full line reads `{r, eval=FALSE}`.
### 6\.4\.2 `group_by` multiple variables
Great. Now let’s summarize by both year and site like we did in the pivot table. We are able to `group_by` more than one variable. Let’s do this together:
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n())
```
We put the site first because that is what we want as an end product. But we could easily have put year first. We saw visually what would happen when we did this in the Pivot Table.
Great.
### 6\.4\.3 `summarize` multiple variables
We can summarize multiple variables at a time.
So far we’ve summarized the count of lobster observations. Let’s also calculate the mean and standard deviation. First let’s use the `mean()` function to calculate the mean. We do this within the same `summarize()` function, but we can add a new line to make it easier to read. Notice how when you put your cursor within the parentheses and hit return, the indentation will automatically align.
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm))
```
> ***Aside*** Command\-I will properly indent selected lines.
Great! But this will actually calculate some of the means as NA because one or more values in that year are NA. So we can pass an argument that says to remove NAs first before calculating the average. Let’s do that, and then also calculate the standard deviation with the `sd()` function:
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm=TRUE),
sd_size_mm = sd(size_mm, na.rm=TRUE))
```
So we can make the equivalent of Excel’s pivot table in R with `group_by() %>% summarize()`.
Now we are at the point where we actually want to save this summary information as a variable so we can use it in further analyses and formatting.
So let’s add a variable assignment to that first line:
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
## inspect our new variable
siteyear_summary
```
### 6\.4\.4 Table formatting with `kable()`
There are several options for formatting tables in RMarkdown; we’ll show one here from the `kableExtra` package and learn more about it tomorrow.
It works nicely with the pipe operator, so we can build this table directly from our new object:
```
## make a table with our new variable
siteyear_summary %>%
kable()
```
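If you want to dress the table up a bit right away, `kableExtra` (attached in our setup chunk) provides styling helpers that also chain with the pipe. Here is a small optional sketch using `kable_styling()` from `kableExtra`; we did not use it above, and we will see more of this tomorrow:
```
## optional extra styling from kableExtra
siteyear_summary %>%
  kable() %>%
  kable_styling(full_width = FALSE)
```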
### 6\.4\.5 R code in\-line in RMarkdown
Before we let you try this on your own, let’s go outside of our code chunk and write in Markdown.
I want to demo something that is a really powerful RMarkdown feature that we can already leverage with what we know in R.
Write this **in Markdown** but replace the \# with a backtick (\`): “There are \#r nrow(lobsters)\# total lobsters included in this report.” Let’s knit to see what happens.
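For reference, the finished sentence with real backticks looks like this (displayed in a code block here so that it is not evaluated when knitting):
```
There are `r nrow(lobsters)` total lobsters included in this report.
```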
I hope you can start to imagine the possibilities. If you wanted to write which year had the most observations, or which site had a decreasing trend, you would be able to.
### 6\.4\.6 Activity
1. Build from our analysis and calculate the median lobster size for each site and year. Your calculation will use the `size_mm` variable and a function to calculate the median (hint: `?median`).
2. Create and `ggsave()` a plot.
Then, save, commit, and push your .Rmd, .html, and .png.
Solution (no peeking):
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE),
median_size_mm = median(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
## a ggplot option:
ggplot(data = siteyear_summary, aes(x = year, y = median_size_mm, color = site)) +
geom_line()
```
```
ggsave(here("figures", "lobsters-line.png"))
```
```
## Saving 7 x 5 in image
```
```
## another option:
ggplot(siteyear_summary, aes(x = year, y = median_size_mm)) +
geom_col() +
facet_wrap(~site)
```
```
ggsave(here("figures", "lobsters-col.png"))
```
```
## Saving 7 x 5 in image
```
Don’t forget to knit, commit, and push!
Nice work everybody.
6\.5 Oh no, they sent the wrong data!
-------------------------------------
Oh no! After all our analyses and everything we’ve done, our colleague just emailed us at 4:30pm on Friday that he sent the wrong data and we need to redo all our analyses with a new .xlsx file: `lobsters2.xlsx`, not `lobsters.xlsx`. Aaaaah!
If we were doing this in Excel, this would be a bummer; we’d have to rebuild our pivot table and click through all of our logic again. And then export our figures and save them into our report.
But, since we did it in R, we are much safer. R’s power lies not only in analysis, but also in automation and reproducibility.
This means we can go back to the top of our RMarkdown file, read in this new data file, and then re\-knit. We will still need to check that everything outputs correctly (and that column headers haven’t been renamed), but our first pass will be to update the filename and re\-knit:
```
## read in data
lobsters <- read_xlsx(here("data/lobsters2.xlsx"), skip=4)
```
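Before re\-knitting, it can be worth a quick sanity check that the new file has the structure the rest of our code expects (a small optional sketch, not part of the original workflow):
```
## quick sanity checks on the newly read-in data
names(lobsters)        # are the column names still what our code expects?
skimr::skim(lobsters)  # re-check classes, counts, and missing values
```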
And now we can see that our plot updated as well:
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE),
median_size_mm = median(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
siteyear_summary
```
```
## a ggplot option:
ggplot(data = siteyear_summary, aes(x = year, y = median_size_mm, color = site)) +
geom_line()
```
```
ggsave(here("figures", "lobsters-line.png"))
## another option:
ggplot(siteyear_summary, aes(x = year, y = median_size_mm)) +
geom_col() +
facet_wrap(~site)
```
```
ggsave(here("figures", "lobsters-col.png"))
```
### 6\.5\.1 Knit, push, \& show differences on GitHub
So cool.
### 6\.5\.2 `dplyr::count()`
Now that we’ve spent time with `group_by() %>% summarize()`, there is a shortcut if you only want to summarize by count. This is the `count()` function, which will group\_by your selected variable(s), count the rows, and then also ungroup. It looks like this:
```
lobsters %>%
count(site, year)
## This is the same as:
lobsters %>%
group_by(site, year) %>%
summarize(n = n()) %>%
ungroup()
```
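One small optional extra (not in the original lesson): `count()` also has a `sort` argument, so you can list the most common site\-year combinations first:
```
lobsters %>%
  count(site, year, sort = TRUE)
```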
Hey, we could update our RMarkdown text knowing this: There are \#r count(lobsters)\# total lobsters included in this summary.
Switching gears…
6\.6 `mutate()`
---------------
There are a lot of times where you don’t want to summarize your data, but you do want to operate beyond the original data. This is often done by adding a column. We do this with the `mutate()` function from `dplyr`. Let’s try this with our original lobsters data. The sizes are in millimeters but let’s say it was important for them to be in meters. We can add a column with this calculation:
```
lobsters %>%
mutate(size_m = size_mm / 1000)
```
If we want to add a column that has the same value repeated, we can pass it just one value, either a number or a character string (in quotes). Let’s also save this as a variable called `lobsters_detailed`:
```
lobsters_detailed <- lobsters %>%
mutate(size_m = size_mm / 1000,
millenia = 2000,
observer = "Allison Horst")
```
6\.7 `select()`
---------------
We will end with one final function, `select`. This is how to choose, retain, and move your data by columns:
Let’s say that we want to present this data finally with only columns for date, site, and size in meters. We would do this:
```
lobsters_detailed %>%
select(date, site, size_m)
```
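Since `select()` keeps columns in the order you list them, it is also how you “move” columns. A quick sketch reordering the same three columns:
```
## same columns, with site listed first
lobsters_detailed %>%
  select(site, date, size_m)
```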
One last time, let’s knit, save, commit, and push to GitHub.
### 6\.7\.1 END **dplyr\-pivot\-tables** session!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/pivot-tables.html |
Chapter 6 Pivot Tables with `dplyr`
===================================
6\.1 Summary
------------
Pivot tables are powerful tools in Excel for summarizing data in different ways. We will create these tables using the `group_by` and `summarize` functions from the `dplyr` package (part of the Tidyverse). We will also learn how to format tables and practice creating a reproducible report using RMarkdown and sharing it with GitHub.
**Data used in the synthesis section:**
* File name: lobsters.xlsx and lobsters2\.xlsx
* Description: Lobster size, abundance and fishing pressure (Santa Barbara coast)
* Link: [https://portal.edirepository.org/nis/mapbrowse?scope\=knb\-lter\-sbc\&identifier\=77\&revision\=newest](https://portal.edirepository.org/nis/mapbrowse?scope=knb-lter-sbc&identifier=77&revision=newest)
* Citation: Reed D. 2019\. SBC LTER: Reef: Abundance, size and fishing effort for California Spiny Lobster (Panulirus interruptus), ongoing since 2012\. Environmental Data Initiative. [doi](https://doi.org/10.6073/pasta/a593a675d644fdefb736750b291579a0).
### 6\.1\.1 Objectives
In R, we can use the `dplyr` package for pivot tables by using 2 functions `group_by` and `summarize` together with the pipe operator `%>%`. We will also continue to emphasize reproducibility in all our analyses.
* Discuss pivot tables in Excel
* Introduce `group_by() %>% summarize()` from the `dplyr` package
* Learn `mutate()` and `select()` to work column\-wise
* Practice our reproducible workflow with RMarkdown and GitHub
### 6\.1\.2 Resources
* [`dplyr` website: dplyr.tidyverse.org](https://dplyr.tidyverse.org/)
* [R for Data Science: Transform Chapter](https://r4ds.had.co.nz/transform.html) by Hadley Wickham \& Garrett Grolemund
* [Intro to Pivot Tables I\-III videos](https://youtu.be/g530cnFfk8Y) by Excel Campus
* [Data organization in spreadsheets](https://peerj.com/preprints/3183/) by Karl Broman \& Kara Woo
6\.2 Overview \& setup
----------------------
[Wikipedia describes a pivot table](https://en.wikipedia.org/wiki/Pivot_table) as a “table of statistics that summarizes the data of a more extensive table…this summary might include sums, averages, or other statistics, which the pivot table groups together in a meaningful way.”
> **Aside:** Wikipedia also says that “Although pivot table is a generic term, Microsoft trademarked PivotTable in the United States in 1994\.”
Pivot tables are a really powerful tool for summarizing data, and we can have similar functionality in R — as well as nicely automating and reporting these tables.
We will first have a look at our data, demo using pivot tables in Excel, and then create reproducible tables in R.
### 6\.2\.1 View data in Excel
When reading in Excel files (or really any data that isn’t yours), it can be a good idea to open the data and look at it so you know what you’re up against.
Let’s open the lobsters.xlsx data in Excel.
It’s one sheet, and it’s rectangular. In this data set, every row is a unique observation. This is called “uncounted” data; you’ll see there is no row for how many lobsters were seen because each row is an observation, or an “n of 1\.”
But also notice that the data doesn’t start until line 5; there are 4 lines of metadata — data about the data that is super important! — that we don’t want to muddy our analyses.
Now your first idea might be to delete these 4 rows from this Excel sheet and save them on another, but we also know that we need to keep the raw data raw. So let’s not touch this data in Excel; we’ll remove these lines in R. Let’s do that first so that we’ll be all set.
### 6\.2\.2 RMarkdown setup
Let’s start a new RMarkdown file in our repo, at the top\-level (where it will be created by default in our Project). I’ll call mine `pivot_lobsters.Rmd`.
In the setup chunk, let’s attach our libraries and read in our lobster data. In addition to the `tidyverse` package, we will also use the `skimr` package. You will have to install it, but you don’t want it to be reinstalled every time you run your code. The following is a nice convention for keeping the install instructions available on the same line as the `library()` call.
```
## attach libraries
library(tidyverse)
library(readxl)
library(here)
library(skimr) # install.packages('skimr')
library(kableExtra) # install.packages('kableExtra')
```
We used `read_excel()` before, which is the generic function that reads both .xls and .xlsx files. Since we know that this is a .xlsx file, we will demo using the `read_xlsx()` function.
We can expect that someone in the history of R and especially the history of the `readxl` package has needed to skip lines at the top of an Excel file before. So let’s look at the help pages `?read_xlsx`: there is an argument called `skip` that we can set to 4 to skip 4 lines.
```
## read in data
lobsters <- read_xlsx(here("data/lobsters.xlsx"), skip=4)
```
Great. We’ve seen this data in Excel so I don’t feel the need to use `head()` here like we’ve done before, but I do like having a look at summary statistics and classes.
#### 6\.2\.2\.1 `skimr::skim`
To look at summary statistics we’ve used `summary()`, which is good for numeric columns but doesn’t give a lot of useful information for non\-numeric data. For example, it wouldn’t tell us how many unique sites there are in this dataset. To have a look at that, I like using the `skimr` package:
```
# explore data
skimr::skim(lobsters)
```
This `skimr::` notation is a reminder to me that `skim` is from the `skimr` package. It is a nice convention: it’s a reminder to others (especially you!).
`skim` lets us look more at each variable. Here we can look at our character variables and see that there are 5 unique sites (in the `n_unique` output). Also, I particularly like looking at missing data. There are 6 missing values in the `size_mm` variable.
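If you want to double\-check a couple of those numbers without `skimr`, a quick base R sketch (not part of the original lesson) is:
```
## how many unique sites, and how many missing size values?
length(unique(lobsters$site))
sum(is.na(lobsters$size_mm))
```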
### 6\.2\.3 Our task
So now we have an idea of our data. But now we have a task: we’ve been asked by a colleague to report about how the average size of lobsters has changed for each site across time.
We will complete this task with R by using the `dplyr` package for data wrangling, which we will do after demoing how we would do it with pivot tables in Excel.
6\.3 Pivot table demo
---------------------
I will demo how we will make a pivot table with our lobster data. You are welcome to sit back and watch rather than following along.
First let’s summarize how many lobsters were counted each year. This means I want a count of rows by year.
So to do this in Excel we would initiate the Pivot Table Process:
Excel will ask what data I would like to include, and it will do its best to suggest coordinates for my data within the spreadsheet (it can have difficulty with non\-rectangular or “non\-tidy” data). It does a good job here of ignoring those top lines of data description.
It will also suggest we make our PivotTable in a new worksheet.
And then we’ll see our new sheet and a little wizard to help us create the PivotTable.
### 6\.3\.1 pivot one variable
I want to start by summarizing by year, so I first drag the `year` variable down into the “Rows” box. What I see at this point are the years listed: this confirms that I’m going to group by years.
And then, to summarize the counts for each year, I actually drag the same `year` variable into the “Values” box. And it will create a Pivot Table for me! But it uses “sum” as the default summary statistic, which doesn’t make a whole lot of sense for summarizing years. I can click the little “I” icon to change this summary statistic to what I want: Count of year.
A few things to note:
* The pivot table is a separate entity from our data (it’s on a different sheet); the original data has not been affected. This “keeps the raw data raw,” which is great practice.
* The pivot table summarizes on the variables you request, meaning that we don’t see other columns (like date, month, or site).
* Excel also calculates the Grand total for all sites (in bold). This is nice for communicating about data. But it can be problematic in the future, because it might not be clear that this is a calculation and not data. It could be easy to take a total of this column and introduce errors by doubling the total count.
So pivot tables are great because they summarize the data and keep the raw data raw — they even promote good practice because, by default, they ask if you’d like to present the data in a new sheet rather than in the same sheet.
### 6\.3\.2 pivot two variables
We can include multiple variables in our PivotTable. If we want to add site as a second variable, we can drag it down:
But this is comparing sites within a year; we want to compare years within a site. We can reverse the order easily enough by dragging (you just have to remember to do all of these steps the next time you’d want to repeat this):
So in terms of our full task, which is to compare the average lobster size by site and year, we are on our way! I’ll leave this as a cliff\-hanger here in Excel and we will carry forward in R.
Just to recap what we did here: we told Excel we wanted to group by something (here: `year` and `site`) and then summarize by something (here: count, not sum!)
6\.4 `group_by()` %\>% `summarize()`
------------------------------------
In R, we can create the functionality of pivot tables with the same logic: we will tell R to group by something and then summarize by something. Visually, it looks like this:
This graphic is from [RStudio’s old\-school data wrangling cheatsheet](http://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf) (all cheatsheets are available from <https://rstudio.com/resources/cheatsheets>). It’s incredibly powerful to visualize what we are talking about with our data when we do these kinds of operations.
And in code, it looks like this:
```
data %>%
group_by() %>%
summarize()
```
It reads: “Take the data and then group by something and then summarize by something.”
The pipe operator `%>%` is a really critical feature of the `dplyr` package, originally created for the `magrittr` package. It lets us chain together steps of our data wrangling, enabling us to tell a clear story about our entire data analysis. This is not only a written story to archive what we’ve done, but it will be a reproducible story that can be rerun and remixed. It is not difficult to read as a human, and it is not a series of clicks to remember.
Let’s try it out!
### 6\.4\.1 `group_by` one variable
Let’s use `group_by() %>% summarize()` with our `lobsters` data, just like we did in Excel. We will first group\_by year and then summarize by count, using the function `n()` (in the `dplyr` package). `n()` counts the number of times an observation shows up, and since this is uncounted data, this will count each row.
We can say this out loud while we write it: “take the lobsters data and then group\_by year and then summarize by count in a new column we’ll call `count_by_year`.”
```
lobsters %>%
group_by(year) %>%
summarize(count_by_year = n())
```
Notice how together, `group_by` and `summarize` minimize the amount of information we see. We also saw this with the pivot table. We lose the other columns that aren’t involved here.
Question: What if you *don’t* group\_by first? Let’s try it and discuss what’s going on.
```
lobsters %>%
summarize(count = n())
```
```
## # A tibble: 1 x 1
## count
## <int>
## 1 2893
```
So if we don’t `group_by` first, we will get a single summary statistic (a count, in this case) for the whole dataset.
Another question: what if we *only* group\_by?
```
lobsters %>%
group_by(year)
```
```
## # A tibble: 2,893 x 7
## # Groups: year [5]
## year month date site transect replicate size_mm
## <dbl> <dbl> <chr> <chr> <dbl> <chr> <dbl>
## 1 2012 8 8/20/12 ivee 3 A 70
## 2 2012 8 8/20/12 ivee 3 B 60
## 3 2012 8 8/20/12 ivee 3 B 65
## 4 2012 8 8/20/12 ivee 3 B 70
## 5 2012 8 8/20/12 ivee 3 B 85
## 6 2012 8 8/20/12 ivee 3 C 60
## 7 2012 8 8/20/12 ivee 3 C 65
## 8 2012 8 8/20/12 ivee 3 C 67
## 9 2012 8 8/20/12 ivee 3 D 70
## 10 2012 8 8/20/12 ivee 4 B 85
## # … with 2,883 more rows
```
R doesn’t summarize our data, but you can see from the output that it is indeed grouped. However, we haven’t done anything to the original data: we are only exploring. We are keeping the raw data raw.
To convince ourselves, let’s now check the `lobsters` variable. We can do this by clicking on `lobsters` in the Environment pane in RStudio.
We see that we haven’t changed any of our original data that was stored in this variable. (Just like how the pivot table didn’t affect the raw data on the original sheet).
> ***Aside***: You’ll also see that when you click on the variable name in the Environment pane, `View(lobsters)` shows up in your Console. `View()` (capital V) is the R function to view any variable in the viewer. So this is something that you can write in your RMarkdown script, although RMarkdown will not be able to knit this view feature into the formatted document. So, if you want to include `View()` in your RMarkdown document, you will need to either comment it out (`#View()`) or add `eval=FALSE` to the top of the code chunk so that the full line reads `{r, eval=FALSE}`.
### 6\.4\.2 `group_by` multiple variables
Great. Now let’s summarize by both year and site like we did in the pivot table. We are able to `group_by` more than one variable. Let’s do this together:
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n())
```
We put the site first because that is what we want as an end product. But we could easily have put year first. We saw visually what would happen when we did this in the Pivot Table.
Great.
### 6\.4\.3 `summarize` multiple variables
We can summarize multiple variables at a time.
So far we’ve summarized the count of lobster observations. Let’s also calculate the mean and standard deviation. First let’s use the `mean()` function to calculate the mean. We do this within the same `summarize()` function, but we can add a new line to make it easier to read. Notice how when you put your cursor within the parentheses and hit return, the indentation will automatically align.
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm))
```
> ***Aside***: Command\-I (Ctrl\-I on Windows) will properly indent selected lines in RStudio.
Great! But this will actually calculate some of the means as NA because one or more values in that year are NA. So we can pass an argument that says to remove NAs first before calculating the average. Let’s do that, and then also calculate the standard deviation with the `sd()` function:
```
lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm=TRUE),
sd_size_mm = sd(size_mm, na.rm=TRUE))
```
So we can make the equivalent of Excel’s pivot table in R with `group_by() %>% summarize()`.
Now we are at the point where we actually want to save this summary information as a variable so we can use it in further analyses and formatting.
So let’s add a variable assignment to that first line:
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
## inspect our new variable
siteyear_summary
```
### 6\.4\.4 Table formatting with `kable()`
There are several options for formatting tables in RMarkdown; we’ll show one here from the `kableExtra` package and learn more about it tomorrow.
It works nicely with the pipe operator, so we can build this directly from our new object:
```
## make a table with our new variable
siteyear_summary %>%
kable()
```
### 6\.4\.5 R code in\-line in RMarkdown
Before we let you try this on your own, let’s go outside of our code chunk and write in Markdown.
I want to demo a really powerful RMarkdown feature that we can already leverage with what we know in R.
Write this **in Markdown** but replace the \# with a backtick (\`): “There are \#r nrow(lobsters)\# total lobsters included in this report.” Let’s knit to see what happens.
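For reference, the finished line of Markdown (with real backticks) looks like this:

```
There are `r nrow(lobsters)` total lobsters included in this report.
```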
I hope you can start to imagine the possibilities. If you wanted to write which year had the most observations, or which site had a decreasing trend, you would be able to.
### 6\.4\.6 Activity
1. Build from our analysis and calculate the median lobster size for each site and year. Your calculation will use the `size_mm` variable and a function to calculate the median (hint: `?median`).
2. Create and `ggsave()` a plot.
Then, save, commit, and push your .Rmd, .html, and .png.
Solution (no peeking):
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE),
median_size_mm = median(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
## a ggplot option:
ggplot(data = siteyear_summary, aes(x = year, y = median_size_mm, color = site)) +
geom_line()
```
```
ggsave(here("figures", "lobsters-line.png"))
```
```
## Saving 7 x 5 in image
```
```
## another option:
ggplot(siteyear_summary, aes(x = year, y = median_size_mm)) +
geom_col() +
facet_wrap(~site)
```
```
ggsave(here("figures", "lobsters-col.png"))
```
```
## Saving 7 x 5 in image
```
Don’t forget to knit, commit, and push!
Nice work everybody.
6\.5 Oh no, they sent the wrong data!
-------------------------------------
Oh no! After all our analyses and everything we’ve done, our colleague just emailed us at 4:30pm on Friday that he sent the wrong data and we need to redo all our analyses with a new .xlsx file: `lobsters2.xlsx`, not `lobsters.xlsx`. Aaaaah!
If we were doing this in Excel, this would be a bummer; we’d have to rebuild our pivot table and click through all of our logic again. And then export our figures and save them into our report.
But, since we did it in R, we are much safer. R’s strength is not only its analytical power, but also automation and reproducibility.
This means we can go back to the top of our RMarkdown file, read in this new data file, and then re\-knit. We will still need to check that everything outputs correctly (and that column headers haven’t been renamed), but our first pass will be to update the filename and re\-knit:
```
## read in data
lobsters <- read_xlsx(here("data/lobsters2.xlsx"), skip=4)
```
And now we can see that our plot updated as well:
```
siteyear_summary <- lobsters %>%
group_by(site, year) %>%
summarize(count_by_siteyear = n(),
mean_size_mm = mean(size_mm, na.rm = TRUE),
sd_size_mm = sd(size_mm, na.rm = TRUE),
            median_size_mm = median(size_mm, na.rm = TRUE))
```
```
## `summarise()` regrouping output by 'site' (override with `.groups` argument)
```
```
siteyear_summary
```
```
## a ggplot option:
ggplot(data = siteyear_summary, aes(x = year, y = median_size_mm, color = site)) +
geom_line()
```
```
ggsave(here("figures", "lobsters-line.png"))
## another option:
ggplot(siteyear_summary, aes(x = year, y = median_size_mm)) +
geom_col() +
facet_wrap(~site)
```
```
ggsave(here("figures", "lobsters-col.png"))
```
### 6\.5\.1 Knit, push, \& show differences on GitHub
So cool.
### 6\.5\.2 `dplyr::count()`
Now that we’ve spent time with group\_by %\>% summarize, there is a shortcut if you only want to summarize by count. This is with a function called `count()`, and it will group\_by your selected variable, count, and then also ungroup. It looks like this:
```
lobsters %>%
count(site, year)
## This is the same as:
lobsters %>%
group_by(site, year) %>%
summarize(n = n()) %>%
ungroup()
```
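As a small aside (not something we use in this session), `count()` also takes a `sort` argument if you want the largest groups listed first; a quick sketch:

```
## largest site/year groups first
lobsters %>%
  count(site, year, sort = TRUE)
```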
Hey, we could update our RMarkdown text knowing this: There are \#r count(lobsters)\# total lobsters included in this summary.
Switching gears…
6\.6 `mutate()`
---------------
There are a lot of times where you don’t want to summarize your data, but you do want to operate beyond the original data. This is often done by adding a column. We do this with the `mutate()` function from `dplyr`. Let’s try this with our original lobsters data. The sizes are in millimeters but let’s say it was important for them to be in meters. We can add a column with this calculation:
```
lobsters %>%
mutate(size_m = size_mm / 1000)
```
If we want to add a column that has the same value repeated, we can pass it just one value, either a number or a character string (in quotes). And let’s save this as a variable called `lobsters_detailed`:
```
lobsters_detailed <- lobsters %>%
mutate(size_m = size_mm / 1000,
millenia = 2000,
observer = "Allison Horst")
```
6\.7 `select()`
---------------
We will end with one final function, `select`. This is how to choose, retain, and move your data by columns:
Let’s say that we want to present this data finally with only columns for date, site, and size in meters. We would do this:
```
lobsters_detailed %>%
select(date, site, size_m)
```
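Because `select()` returns columns in the order you list them, it can also be used just to reorder columns; for example, a quick sketch putting site first:

```
## same three columns, with site listed first
lobsters_detailed %>%
  select(site, date, size_m)
```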
One last time, let’s knit, save, commit, and push to GitHub.
### 6\.7\.1 END **dplyr\-pivot\-tables** session!
Chapter 7 Tidying
=================
7\.1 Summary
------------
In previous sessions, we learned to read in data, do some wrangling, and create a graph and table.
Here, we’ll continue by *reshaping* data frames (converting from long\-to\-wide, or wide\-to\-long format), *separating* and *uniting* variable (column) contents, and finding and replacing string patterns.
### 7\.1\.1 Tidy data
“Tidy” might sound like a generic way to describe non\-messy looking data, but it is actually a specific data structure. When data is *tidy*, it is rectangular with each variable as a column, each row an observation, and each cell contains a single value (see: [Ch. 12 in R for Data Science by Grolemund \& Wickham](https://r4ds.had.co.nz/tidy-data.html)).
### 7\.1\.2 Objectives
In this session we’ll learn some tools to help make our data **tidy** and more coder\-friendly. Those include:
* Use `tidyr::pivot_wider()` and `tidyr::pivot_longer()` to reshape data frames
* `janitor::clean_names()` to make column headers more manageable
* `tidyr::unite()` and `tidyr::separate()` to merge or separate information from different columns
* Detect or replace a string with `stringr` functions
### 7\.1\.3 Resources
* [Ch. 12 *Tidy Data*, in R for Data Science](https://r4ds.had.co.nz/tidy-data.html) by Grolemund \& Wickham
* [`tidyr` documentation from tidyverse.org](https://tidyr.tidyverse.org/)
* [`janitor` repo / information](https://github.com/sfirke/janitor) from Sam Firke
7\.2 Set\-up
------------
### 7\.2\.1 Create a new R Markdown and attach packages
* Open your project from Day 1 (click on the .Rproj file)
* PULL to make sure your project is up to date
* Create a new R Markdown file called `my_tidying.Rmd`
* Remove all example code / text below the first code chunk
* Attach the packages we’ll use here (`library(package_name)`):
+ `tidyverse`
+ `here`
+ `janitor`
+ `readxl`
Knit and save your new .Rmd within the project folder.
```
# Attach packages
library(tidyverse)
library(janitor)
library(here)
library(readxl)
```
### 7\.2\.2 `read_excel()` to read in data from an Excel worksheet
We’ve used both `read_csv()` and `read_excel()` to import data from spreadsheets into R.
Use `read_excel()` to read in the **inverts.xlsx** data as an object called **inverts**.
```
inverts <- read_excel(here("data", "inverts.xlsx"))
```
Be sure to explore the imported data a bit:
```
View(inverts)
names(inverts)
summary(inverts)
```
7\.3 `tidyr::pivot_longer()` to reshape from wider\-to\-longer format
---------------------------------------------------------------------
If we look at *inverts*, we can see that the *year* variable is actually split over 3 columns, so we’d say this is currently in **wide format**.
There may be times when you want to have data in wide format, but often with code it is more efficient to convert to **long format** by gathering together observations for a variable that is currently split into multiple columns.
Schematically, converting from wide to long format using `pivot_longer()` looks like this:
We’ll use `tidyr::pivot_longer()` to gather data from all years in *inverts* (columns `2016`, `2017`, and `2018`) into two columns:
* one called *year*, which contains the year
* one called *sp\_count* containing the number of each species observed.
The new data frame will be stored as *inverts\_long*:
```
# Note: Either single-quotes, double-quotes, OR backticks around years work!
inverts_long <- pivot_longer(data = inverts,
cols = '2016':'2018',
names_to = "year",
values_to = "sp_count")
```
The outcome is the new long\-format *inverts\_long* data frame:
```
inverts_long
```
```
## # A tibble: 165 x 5
## month site common_name year sp_count
## <chr> <chr> <chr> <chr> <dbl>
## 1 7 abur california cone snail 2016 451
## 2 7 abur california cone snail 2017 28
## 3 7 abur california cone snail 2018 762
## 4 7 abur california spiny lobster 2016 17
## 5 7 abur california spiny lobster 2017 17
## 6 7 abur california spiny lobster 2018 16
## 7 7 abur orange cup coral 2016 24
## 8 7 abur orange cup coral 2017 24
## 9 7 abur orange cup coral 2018 24
## 10 7 abur purple urchin 2016 48
## # … with 155 more rows
```
Hooray, long format!
One thing that isn’t obvious at first (but would become obvious if you continued working with this data) is that, because those year numbers were initially column names (and therefore characters), their class was not automatically updated to numeric when they were stacked into the *year* column.
Explore the class of *year* in *inverts\_long*:
```
class(inverts_long$year)
```
```
## [1] "character"
```
That’s a good thing! We don’t want R to update classes of our data without our instruction. We’ll use `dplyr::mutate()` in a different way here: to create a new column (that’s how we’ve used `mutate()` previously) that has the same name as an existing column, in order to update and overwrite the existing column.
In this case, we’ll `mutate()` to add a column called *year*, which contains an `as.numeric()` version of the existing *year* variable:
```
# Coerce "year" class to numeric:
inverts_long <- inverts_long %>%
mutate(year = as.numeric(year))
```
Checking the class again, we see that *year* has been updated to a numeric variable:
```
class(inverts_long$year)
```
```
## [1] "numeric"
```
7\.4 `tidyr::pivot_wider()` to convert from longer\-to\-wider format
--------------------------------------------------------------------
In the previous example, we had information spread over multiple columns that we wanted to *gather*. Sometimes, we’ll have data that we want to *spread* over multiple columns.
For example, imagine that starting from *inverts\_long* we want each species in the *common\_name* column to exist as its **own column**. In that case, we would be converting from a longer to a wider format, and will use `tidyr::pivot_wider()`.
Specifically for our data, we’ll use `pivot_wider()` to spread the *common\_name* across multiple columns as follows:
```
inverts_wide <- inverts_long %>%
pivot_wider(names_from = common_name,
values_from = sp_count)
```
```
inverts_wide
```
```
## # A tibble: 33 x 8
## month site year `california con… `california spi… `orange cup cor…
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 7 abur 2016 451 17 24
## 2 7 abur 2017 28 17 24
## 3 7 abur 2018 762 16 24
## 4 7 ahnd 2016 27 16 24
## 5 7 ahnd 2017 24 16 24
## 6 7 ahnd 2018 24 16 24
## 7 7 aque 2016 4971 48 1526
## 8 7 aque 2017 1752 48 1623
## 9 7 aque 2018 2616 48 1859
## 10 7 bull 2016 1735 24 36
## # … with 23 more rows, and 2 more variables: `purple urchin` <dbl>, `rock
## # scallop` <dbl>
```
We can see that now each *species* has its own column (wider format). But also notice that those column headers (since they have spaces) might not be in the most coder\-friendly format…
7\.5 `janitor::clean_names()` to clean up column names
------------------------------------------------------
The `janitor` package by Sam Firke is a great collection of functions for some quick data cleaning, like:
* `janitor::clean_names()`: update column headers to a case of your choosing
* `janitor::get_dupes()`: see all rows that are duplicates within variables you choose
* `janitor::remove_empty()`: remove empty rows and/or columns
* `janitor::adorn_*()`: jazz up tables
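As a quick illustration of one of these helpers (we won’t use it again in this session), `get_dupes()` takes a data frame and the columns to check; a sketch on our *inverts* data:

```
## returns any rows whose month/site/common_name combination appears
## more than once, along with a dupe_count column
inverts %>%
  get_dupes(month, site, common_name)
```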
Here, we’ll use `janitor::clean_names()` to convert all of our column headers to a more convenient case \- the default is **lower\_snake\_case**, which means all spaces and symbols are replaced with an underscore (or a word describing the symbol), all characters are lowercase, and a few other nice adjustments.
For example, `janitor::clean_names()` would update these nightmare column names into much nicer forms:
* `My...RECENT-income!` becomes `my_recent_income`
* `SAMPLE2.!test1` becomes `sample2_test1`
* `ThisIsTheName` becomes `this_is_the_name`
* `2015` becomes `x2015`
If we wanted to then use these columns (which we probably would, since we created them), we could clean the names to get them into more coder\-friendly lower\_snake\_case with `janitor::clean_names()`:
```
inverts_wide <- inverts_wide %>%
clean_names()
```
```
names(inverts_wide)
```
```
## [1] "month" "site"
## [3] "year" "california_cone_snail"
## [5] "california_spiny_lobster" "orange_cup_coral"
## [7] "purple_urchin" "rock_scallop"
```
And there are other case options in `clean_names()`, like:
* “snake” produces snake\_case (the default)
* “lower\_camel” or “small\_camel” produces lowerCamel
* “upper\_camel” or “big\_camel” produces UpperCamel
* “screaming\_snake” or “all\_caps” produces ALL\_CAPS
* “lower\_upper” produces lowerUPPER
* “upper\_lower” produces UPPERlower
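Any of these can be passed to the `case` argument. For example, to get ALL\_CAPS headers you could run something like the following (a quick sketch; we stick with the default snake\_case in this session):

```
## convert headers to SCREAMING_SNAKE case
inverts_wide %>%
  clean_names(case = "screaming_snake")
```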
7\.6 `tidyr::unite()` and `tidyr::separate()` to combine or separate information in column(s)
---------------------------------------------------------------------------------------------
Sometimes we’ll want to *separate* contents of a single column into multiple columns, or *combine* entries from different columns into a single column.
For example, the following data frame has *genus* and *species* in separate columns:
We may want to combine the genus and species into a single column, *scientific\_name*:
Or we may want to do the reverse (separate information from a single column into multiple columns). Here, we’ll learn `tidyr::unite()` and `tidyr::separate()` to help us do both.
### 7\.6\.1 `tidyr::unite()` to merge information from separate columns
Use `tidyr::unite()` to combine information from multiple columns into a single column (as for the scientific name example above)
To demonstrate uniting information from separate columns, we’ll make a single column that has the combined information from *site* abbreviation and *year* in *inverts\_long*.
We need to give `tidyr::unite()` several arguments:
* **data:** the data frame containing columns we want to combine (or pipe into the function from the data frame)
* **col:** the name of the new “united” column
* the **columns you are uniting**
* **sep:** the symbol, value or character to put between the united information from each column
```
inverts_unite <- inverts_long %>%
unite(col = "site_year", # What to name the new united column
c(site, year), # The columns we'll unite (site, year)
sep = "_") # How to separate the things we're uniting
```
```
## # A tibble: 6 x 4
## month site_year common_name sp_count
## <chr> <chr> <chr> <dbl>
## 1 7 abur_2016 california cone snail 451
## 2 7 abur_2017 california cone snail 28
## 3 7 abur_2018 california cone snail 762
## 4 7 abur_2016 california spiny lobster 17
## 5 7 abur_2017 california spiny lobster 17
## 6 7 abur_2018 california spiny lobster 16
```
#### 7\.6\.1\.1 Activity:
**Task:** Create a new object called ‘inverts\_moyr,’ starting from inverts\_long, that unites the month and year columns into a single column named “mo\_yr,” using a slash “/” as the separator. Then try updating the separator to something else! Like “*hello!*”
**Solution:**
```
inverts_moyr <- inverts_long %>%
unite(col = "mo_yr", # What to name the new united column
        c(month, year), # The columns we'll unite (month, year)
sep = "/")
```
**Merging information from \> 2 columns (not done in workshop)**
`tidyr::unite()` can also combine information from *more* than two columns. For example, to combine the *site*, *common\_name* and *year* columns from *inverts\_long*, we could use:
```
# Uniting more than 2 columns:
inverts_triple_unite <- inverts_long %>%
tidyr::unite(col = "year_site_name",
c(year, site, common_name),
sep = "-") # Note: this is a dash
```
```
head(inverts_triple_unite)
```
```
## # A tibble: 6 x 3
## month year_site_name sp_count
## <chr> <chr> <dbl>
## 1 7 2016-abur-california cone snail 451
## 2 7 2017-abur-california cone snail 28
## 3 7 2018-abur-california cone snail 762
## 4 7 2016-abur-california spiny lobster 17
## 5 7 2017-abur-california spiny lobster 17
## 6 7 2018-abur-california spiny lobster 16
```
### 7\.6\.2 `tidyr::separate()` to separate information into multiple columns
While `tidyr::unite()` allows us to combine information from multiple columns, it’s more likely that you’ll *start* with a single column that you want to split up into pieces.
For example, I might want to split up a column containing the *genus* and *species* (*Scorpaena guttata*) into two separate columns (*Scorpaena* \| *guttata*), so that I can count how many *Scorpaena* organisms exist in my dataset at the genus level.
Use `tidyr::separate()` to “separate a character column into multiple columns using a regular expression separator.”
Let’s start again with *inverts\_unite*, where we have combined the *site* and *year* into a single column called *site\_year*. If we want to **separate** those, we can use:
```
inverts_sep <- inverts_unite %>%
tidyr::separate(site_year, into = c("my_site", "my_year"))
```
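By default, `separate()` splits at any sequence of non\-alphanumeric characters, which works here because we united *site* and *year* with an underscore. If you ever need to be explicit about the separator, you can pass `sep` yourself; a quick sketch:

```
## explicit separator: the underscore we used in unite()
inverts_sep <- inverts_unite %>%
  tidyr::separate(site_year, into = c("my_site", "my_year"), sep = "_")
```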
7\.7 `stringr::str_replace()` to replace a pattern
--------------------------------------------------
Was data entered in a way that’s difficult to code with, or is just plain annoying? Did someone wrongly enter “fish” as “fsh” throughout the spreadsheet, and you want to update it everywhere?
Use `stringr::str_replace()` to automatically replace a string pattern.
**Warning**: The pattern will be replaced everywhere \- so if you ask to replace “fsh” with “fish,” then “offshore” would be updated to “offishore.” Be careful to ensure that when you think you’re making one replacement, you’re not also replacing something else unexpectedly.
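A one\-line illustration of that gotcha (a quick sketch you could run in the Console):

```
## "offshore" contains the pattern "fsh", so it quietly becomes "offishore"
str_replace("offshore", pattern = "fsh", replacement = "fish")
```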
Starting with *inverts*, let’s replace “california” with the abbreviation “CA” wherever it appears in the *common\_name* column:
```
ca_abbr <- inverts %>%
mutate(
common_name =
str_replace(common_name,
pattern = "california",
replacement = "CA")
)
```
Now, check to confirm that “california” has been replaced with “CA.”
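One quick way to check (a minimal sketch using base R):
```
# Confirm the replacement worked
unique(ca_abbr$common_name)
head(ca_abbr)
```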
### 7\.7\.1 END **tidying** session!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/tidying.html |
Chapter 7 Tidying
=================
7\.1 Summary
------------
In previous sessions, we learned to read in data, do some wrangling, and create a graph and table.
Here, we’ll continue by *reshaping* data frames (converting from long\-to\-wide, or wide\-to\-long format), *separating* and *uniting* variable (column) contents, and finding and replacing string patterns.
### 7\.1\.1 Tidy data
“Tidy” might sound like a generic way to describe non\-messy looking data, but it is actually a specific data structure. When data is *tidy*, it is rectangular with each variable as a column, each row an observation, and each cell contains a single value (see: [Ch. 12 in R for Data Science by Grolemund \& Wickham](https://r4ds.had.co.nz/tidy-data.html)).
### 7\.1\.2 Objectives
In this session we’ll learn some tools to help make our data **tidy** and more coder\-friendly. Those include:
* Use `tidyr::pivot_wider()` and `tidyr::pivot_longer()` to reshape data frames
* `janitor::clean_names()` to make column headers more manageable
* `tidyr::unite()` and `tidyr::separate()` to merge or separate information from different columns
* Detect or replace a string with `stringr` functions
### 7\.1\.3 Resources
– [Ch. 12 *Tidy Data*, in R for Data Science](https://r4ds.had.co.nz/tidy-data.html) by Grolemund \& Wickham
\- [`tidyr` documentation from tidyverse.org](https://tidyr.tidyverse.org/)
\- [`janitor` repo / information](https://github.com/sfirke/janitor) from Sam Firke
7\.2 Set\-up
------------
### 7\.2\.1 Create a new R Markdown and attach packages
* Open your project from Day 1 (click on the .Rproj file)
* PULL to make sure your project is up to date
* Create a new R Markdown file called `my_tidying.Rmd`
* Remove all example code / text below the first code chunk
* Attach the packages we’ll use here (`library(package_name)`):
+ `tidyverse`
+ `here`
+ `janitor`
+ `readxl`
Knit and save your new .Rmd within the project folder.
```
# Attach packages
library(tidyverse)
library(janitor)
library(here)
library(readxl)
```
### 7\.2\.2 `read_excel()` to read in data from an Excel worksheet
We’ve used both `read_csv()` and `read_excel()` to import data from spreadsheets into R.
Use `read_excel()` to read in the **inverts.xlsx** data as an object called **inverts**.
```
inverts <- read_excel(here("data", "inverts.xlsx"))
```
Be sure to explore the imported data a bit:
```
View(inverts)
names(inverts)
summary(inverts)
```
7\.3 `tidyr::pivot_longer()` to reshape from wider\-to\-longer format
---------------------------------------------------------------------
If we look at *inverts*, we can see that the *year* variable is actually split over 3 columns, so we’d say this is currently in **wide format**.
There may be times when you want to have data in wide format, but often with code it is more efficient to convert to **long format** by gathering together observations for a variable that is currently split into multiple columns.
Schematically, converting from wide to long format using `pivot_longer()` looks like this:
We’ll use `tidyr::pivot_longer()` to gather data from all years in *inverts* (columns `2016`, `2017`, and `2018`) into two columns:
* one called *year*, which contains the year
* one called *sp\_count* containing the number of each species observed.
The new data frame will be stored as *inverts\_long*:
```
# Note: Either single-quotes, double-quotes, OR backticks around years work!
inverts_long <- pivot_longer(data = inverts,
cols = '2016':'2018',
names_to = "year",
values_to = "sp_count")
```
The outcome is the new long\-format *inverts\_long* data frame:
```
inverts_long
```
```
## # A tibble: 165 x 5
## month site common_name year sp_count
## <chr> <chr> <chr> <chr> <dbl>
## 1 7 abur california cone snail 2016 451
## 2 7 abur california cone snail 2017 28
## 3 7 abur california cone snail 2018 762
## 4 7 abur california spiny lobster 2016 17
## 5 7 abur california spiny lobster 2017 17
## 6 7 abur california spiny lobster 2018 16
## 7 7 abur orange cup coral 2016 24
## 8 7 abur orange cup coral 2017 24
## 9 7 abur orange cup coral 2018 24
## 10 7 abur purple urchin 2016 48
## # … with 155 more rows
```
Hooray, long format!
One thing that isn’t obvious at first (but would become obvious if you continued working with this data) is that since those year numbers were initially column names (characters), when they are stacked into the *year* column, their class wasn’t auto\-updated to numeric.
Explore the class of *year* in *inverts\_long*:
```
class(inverts_long$year)
```
```
## [1] "character"
```
That’s a good thing! We don’t want R to update classes of our data without our instruction. We’ll use `dplyr::mutate()` in a different way here: to create a new column (that’s how we’ve used `mutate()` previously) that has the same name as an existing column, in order to update and overwrite the existing column.
In this case, we’ll `mutate()` to add a column called *year*, which contains an `as.numeric()` version of the existing *year* variable:
```
# Coerce "year" class to numeric:
inverts_long <- inverts_long %>%
mutate(year = as.numeric(year))
```
Checking the class again, we see that *year* has been updated to a numeric variable:
```
class(inverts_long$year)
```
```
## [1] "numeric"
```
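As an aside (not part of the workshop flow), if you’d rather handle the conversion during the pivot itself, newer versions of `tidyr` also accept a `names_transform` argument. A sketch:
```
# Hypothetical alternative: coerce "year" to numeric while pivoting
inverts_long2 <- pivot_longer(data = inverts,
                              cols = '2016':'2018',
                              names_to = "year",
                              values_to = "sp_count",
                              names_transform = list(year = as.numeric))

class(inverts_long2$year) # "numeric"
```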
7\.4 `tidyr::pivot_wider()` to convert from longer\-to\-wider format
--------------------------------------------------------------------
In the previous example, we had information spread over multiple columns that we wanted to *gather*. Sometimes, we’ll have data that we want to *spread* over multiple columns.
For example, imagine that starting from *inverts\_long* we want each species in the *common\_name* column to exist as its **own column**. In that case, we would be converting from a longer to a wider format, and will use `tidyr::pivot_wider()`.
Specifically for our data, we’ll use `pivot_wider()` to spread the *common\_name* across multiple columns as follows:
```
inverts_wide <- inverts_long %>%
pivot_wider(names_from = common_name,
values_from = sp_count)
```
```
inverts_wide
```
```
## # A tibble: 33 x 8
## month site year `california con… `california spi… `orange cup cor…
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 7 abur 2016 451 17 24
## 2 7 abur 2017 28 17 24
## 3 7 abur 2018 762 16 24
## 4 7 ahnd 2016 27 16 24
## 5 7 ahnd 2017 24 16 24
## 6 7 ahnd 2018 24 16 24
## 7 7 aque 2016 4971 48 1526
## 8 7 aque 2017 1752 48 1623
## 9 7 aque 2018 2616 48 1859
## 10 7 bull 2016 1735 24 36
## # … with 23 more rows, and 2 more variables: `purple urchin` <dbl>, `rock
## # scallop` <dbl>
```
We can see that now each *species* has its own column (wider format). But also notice that those column headers (since they have spaces) might not be in the most coder\-friendly format…
7\.5 `janitor::clean_names()` to clean up column names
------------------------------------------------------
The `janitor` package by Sam Firke is a great collection of functions for some quick data cleaning (a couple are sketched just after this list), like:
* `janitor::clean_names()`: update column headers to a case of your choosing
* `janitor::get_dupes()`: see all rows that are duplicates within variables you choose
* `janitor::remove_empty()`: remove empty rows and/or columns
* `janitor::adorn_*()`: jazz up tables
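For example, `get_dupes()` and `remove_empty()` can be called directly on a data frame. A minimal sketch (the column choices here are just illustrative, not part of the workshop):
```
# Hypothetical quick checks with janitor
inverts %>%
  janitor::get_dupes(site, common_name) # rows duplicated across site & common_name

inverts %>%
  janitor::remove_empty(which = c("rows", "cols")) # drop fully empty rows/columns
```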
Here, we’ll use `janitor::clean_names()` to convert all of our column headers to a more convenient case \- the default is **lower\_snake\_case**, which means all spaces and symbols are replaced with an underscore (or a word describing the symbol), all characters are lowercase, and a few other nice adjustments.
For example, `janitor::clean_names()` would update these nightmare column names into much nicer forms:
* `My...RECENT-income!` becomes `my_recent_income`
* `SAMPLE2.!test1` becomes `sample2_test1`
* `ThisIsTheName` becomes `this_is_the_name`
* `2015` becomes `x2015`
If we wanted to then use these columns (which we probably would, since we created them), we could clean the names to get them into more coder\-friendly lower\_snake\_case with `janitor::clean_names()`:
```
inverts_wide <- inverts_wide %>%
clean_names()
```
```
names(inverts_wide)
```
```
## [1] "month" "site"
## [3] "year" "california_cone_snail"
## [5] "california_spiny_lobster" "orange_cup_coral"
## [7] "purple_urchin" "rock_scallop"
```
And there are other case options in `clean_names()` (see the sketch after this list), like:
* “snake” produces snake\_case (the default)
* “lower\_camel” or “small\_camel” produces lowerCamel
* “upper\_camel” or “big\_camel” produces UpperCamel
* “screaming\_snake” or “all\_caps” produces ALL\_CAPS
* “lower\_upper” produces lowerUPPER
* “upper\_lower” produces UPPERlower
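For example, here is a quick sketch of requesting ALL\_CAPS headers via the `case` argument (re\-cleaning the *inverts\_wide* object from above):
```
# Hypothetical: re-clean the names into SCREAMING_SNAKE case
inverts_wide_caps <- inverts_wide %>%
  clean_names(case = "screaming_snake")

names(inverts_wide_caps)
```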
7\.6 `tidyr::unite()` and `tidyr::separate()` to combine or separate information in column(s)
---------------------------------------------------------------------------------------------
Sometimes we’ll want to *separate* contents of a single column into multiple columns, or *combine* entries from different columns into a single column.
For example, the following data frame has *genus* and *species* in separate columns:
We may want to combine the genus and species into a single column, *scientific\_name*:
Or we may want to do the reverse (separate information from a single column into multiple columns). Here, we’ll learn `tidyr::unite()` and `tidyr::separate()` to help us do both.
### 7\.6\.1 `tidyr::unite()` to merge information from separate columns
Use `tidyr::unite()` to combine information from multiple columns into a single column (as for the scientific name example above).
To demonstrate uniting information from separate columns, we’ll make a single column that has the combined information from *site* abbreviation and *year* in *inverts\_long*.
We need to give `tidyr::unite()` several arguments:
* **data:** the data frame containing columns we want to combine (or pipe into the function from the data frame)
* **col:** the name of the new “united” column
* the **columns you are uniting**
* **sep:** the symbol, value or character to put between the united information from each column
```
inverts_unite <- inverts_long %>%
unite(col = "site_year", # What to name the new united column
c(site, year), # The columns we'll unite (site, year)
sep = "_") # How to separate the things we're uniting
```
```
## # A tibble: 6 x 4
## month site_year common_name sp_count
## <chr> <chr> <chr> <dbl>
## 1 7 abur_2016 california cone snail 451
## 2 7 abur_2017 california cone snail 28
## 3 7 abur_2018 california cone snail 762
## 4 7 abur_2016 california spiny lobster 17
## 5 7 abur_2017 california spiny lobster 17
## 6 7 abur_2018 california spiny lobster 16
```
#### 7\.6\.1\.1 Activity:
**Task:** Create a new object called ‘inverts\_moyr,’ starting from inverts\_long, that unites the month and year columns into a single column named “mo\_yr,” using a slash “/” as the separator. Then try updating the separator to something else! Like “*hello!*”
**Solution:**
```
inverts_moyr <- inverts_long %>%
unite(col = "mo_yr", # What to name the new united column
c(month, year), # The columns we'll unite (month, year)
sep = "/")
```
**Merging information from \> 2 columns (not done in workshop)**
`tidyr::unite()` can also combine information from *more* than two columns. For example, to combine the *site*, *common\_name* and *year* columns from *inverts\_long*, we could use:
```
# Uniting more than 2 columns:
inverts_triple_unite <- inverts_long %>%
tidyr::unite(col = "year_site_name",
c(year, site, common_name),
sep = "-") # Note: this is a dash
```
```
head(inverts_triple_unite)
```
```
## # A tibble: 6 x 3
## month year_site_name sp_count
## <chr> <chr> <dbl>
## 1 7 2016-abur-california cone snail 451
## 2 7 2017-abur-california cone snail 28
## 3 7 2018-abur-california cone snail 762
## 4 7 2016-abur-california spiny lobster 17
## 5 7 2017-abur-california spiny lobster 17
## 6 7 2018-abur-california spiny lobster 16
```
### 7\.6\.2 `tidyr::separate()` to separate information into multiple columns
While `tidyr::unite()` allows us to combine information from multiple columns, it’s more likely that you’ll *start* with a single column that you want to split up into pieces.
For example, I might want to split up a column containing the *genus* and *species* (*Scorpaena guttata*) into two separate columns (*Scorpaena* \| *guttata*), so that I can count how many *Scorpaena* organisms exist in my dataset at the genus level.
Use `tidyr::separate()` to “separate a character column into multiple columns using a regular expression separator.”
Let’s start again with *inverts\_unite*, where we have combined the *site* and *year* into a single column called *site\_year*. If we want to **separate** those, we can use:
```
inverts_sep <- inverts_unite %>%
tidyr::separate(site_year, into = c("my_site", "my_year"))
```
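By default, `separate()` splits on any run of non\-alphanumeric characters, which is why the underscore is found without being named. To be explicit about the separator (and to coerce the new year column back to a number), you could add the `sep` and `convert` arguments. A sketch:
```
# Hypothetical: the same separation, with an explicit separator
inverts_sep <- inverts_unite %>%
  tidyr::separate(site_year,
                  into = c("my_site", "my_year"),
                  sep = "_",      # split on the underscore we added with unite()
                  convert = TRUE) # coerce "my_year" to numeric where possible
```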
7\.7 `stringr::str_replace()` to replace a pattern
--------------------------------------------------
Was data entered in a way that’s difficult to code with, or is just plain annoying? Did someone wrongly enter “fish” as “fsh” throughout the spreadsheet, and you want to update it everywhere?
Use `stringr::str_replace()` to automatically replace a string pattern.
**Warning**: The pattern will be replaced everywhere \- so if you ask to replace “fsh” with “fish,” then “offshore” would be updated to “offishore.” Be careful to ensure that when you think you’re making one replacement, you’re not also replacing something else unexpectedly.
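One way to guard against that (a sketch, not part of the workshop code): the pattern is treated as a regular expression, so you can wrap it in word boundaries so that only the standalone word is replaced:
```
# Hypothetical: replace "fsh" only where it appears as a whole word
stringr::str_replace(c("fsh", "offshore"),
                     pattern = "\\bfsh\\b",
                     replacement = "fish")
# Returns: "fish" "offshore" - "offshore" is untouched
```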
Starting with *inverts*, let’s replace “california” with the abbreviation “CA” wherever it appears in the *common\_name* column:
```
ca_abbr <- inverts %>%
mutate(
common_name =
str_replace(common_name,
pattern = "california",
replacement = "CA")
)
```
Now, check to confirm that “california” has been replaced with “CA.”
### 7\.7\.1 END **tidying** session!
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/filter-join.html |
Chapter 8 Filters and joins
===========================
8\.1 Summary
------------
In previous sessions, we’ve learned to do some basic wrangling and find summary information with functions in the `dplyr` package, which exists within the `tidyverse`. In this session, we’ll expand our data wrangling toolkit using:
* `filter()` to conditionally subset our data by **rows**, and
* `*_join()` functions to merge data frames together
* And we’ll make a nicely formatted HTML table with `kable()` and `kableExtra`
The combination of `filter()` and `*_join()` \- to return rows satisfying a condition we specify, and merging data frames by like variables \- is analogous to the useful VLOOKUP function in Excel.
### 8\.1\.1 Objectives
* Use `filter()` to subset data frames, returning **rows** that satisfy variable conditions
* Use `full_join()`, `left_join()`, and `inner_join()` to merge data frames, with different endpoints in mind
* Use `filter()` and `*_join()` as part of a wrangling sequence
### 8\.1\.2 Resources
* [`filter()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/filter.html)
* [`join()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/join.html)
* [Chapters 5 and 13 in *R for Data Science* by Garrett Grolemund and Hadley Wickham](https://r4ds.had.co.nz/)
* [“Create awesome HTML tables with knitr::kable() and kableExtra” by Hao Zhu](https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html)
8\.2 Set\-up: Create a new .Rmd, attach packages \& get data
------------------------------------------------------------
Create a new R Markdown document in your r\-workshop project and knit to save as **filter\_join.Rmd**. Remove all the example code (everything below the set\-up code chunk).
In this session, we’ll attach four packages:
* `tidyverse`
* `readxl`
* `here`
* `kableExtra`
Attach the packages in the setup code chunk in your .Rmd:
```
library(tidyverse)
library(readxl)
library(here)
library(kableExtra)
```
Then create a new code chunk to read in two files from your ‘data’ subfolder:
* fish.csv
* kelp\_fronds.xlsx (read in only the “abur” worksheet by adding argument `sheet = "abur"` to `read_excel()`)
```
# Read in data:
fish <- read_csv(here("data", "fish.csv"))
kelp_abur <- read_excel(here("data", "kelp_fronds.xlsx"), sheet = "abur")
```
We should always explore the data we’ve read in. Use functions like `View()`, `names()`, `summary()`, `head()` and `tail()` to check them out.
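For example, a few quick checks you might run in the Console (output not shown here):
```
names(fish)        # column names
summary(fish)      # quick summaries of each column
head(kelp_abur)    # first few rows
```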
Now, let’s use `filter()` to decide which observations (rows) we’ll keep or exclude in new subsets, similar to using Excel’s VLOOKUP function or filter tool.
8\.3 `dplyr::filter()` to conditionally subset by rows
------------------------------------------------------
Use `filter()` to let R know which **rows** you want to keep or exclude, based on whether or not their contents match conditions that you set for one or more variables.
Some examples in words that might inspire you to use `filter()`:
* “I only want to keep rows where the temperature is greater than 90°F.”
* “I want to keep all observations **except** those where the tree type is listed as **unknown**.”
* “I want to make a new subset with only data for mountain lions (the species variable) in California (the state variable).”
When we use `filter()`, we need to let R know a couple of things:
* What data frame we’re filtering from
* What condition(s) we want observations to **match** and/or **not match** in order to keep them in the new subset
Here, we’ll learn some common ways to use `filter()`.
### 8\.3\.1 Filter rows by matching a single character string
Let’s say we want to keep all observations from the **fish** data frame where the common name is “garibaldi” (fun fact: that’s California’s official marine state fish, protected in California coastal waters!).
Here, we need to tell R to only *keep rows* from the **fish** data frame when the common name (**common\_name** variable) exactly matches **garibaldi**.
Use `==` to ask R to look for exact matches:
```
fish_garibaldi <- fish %>%
filter(common_name == "garibaldi")
```
Check out the **fish\_garibaldi** object to ensure that only *garibaldi* observations remain.
#### 8\.3\.1\.1 Activity
**Task**: Create a subset starting from the **fish** data frame, stored as object **fish\_mohk**, that only contains observations from Mohawk Reef (site entered as “mohk”).
**Solution**:
```
fish_mohk <- fish %>%
filter(site == "mohk")
```
Explore the subset you just created to ensure that only Mohawk Reef observations are returned.
### 8\.3\.2 Filter rows based on numeric conditions
Use the expected comparison operators (\>, \<, \>\=, \<\=, \=\=) to set conditions for a numeric variable when filtering. For this example, we only want to retain observations where the **total\_count** column value is \>\= 50:
```
fish_over50 <- fish %>%
filter(total_count >= 50)
```
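You can also combine numeric conditions. For example, `dplyr::between()` is shorthand for pairing `>=` and `<=` \- a small sketch beyond the original example:
```
# Keep rows where total_count is between 50 and 100, inclusive:
fish_50_to_100 <- fish %>%
  filter(between(total_count, 50, 100))
```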
### 8\.3\.3 Filter to return rows that match *this* OR *that* OR *that*
What if we want to return a subset of the **fish** df that contains *garibaldi*, *blacksmith* OR *black surfperch*?
There are several ways to write an “OR” statement for filtering, which will keep any observations that match Condition A *or* Condition B *or* Condition C. In this example, we will create a subset from **fish** that only contains rows where the **common\_name** is *garibaldi* or *blacksmith* or *black surfperch*.
Way 1: You can indicate **OR** using the vertical line (pipe) operator `|`:
```
fish_3sp <- fish %>%
filter(common_name == "garibaldi" |
common_name == "blacksmith" |
common_name == "black surfperch")
```
Alternatively, if you’re looking for multiple matches in the *same variable*, you can use the `%in%` operator instead. Use `%in%` to ask R to look for *any matches* within a vector:
```
fish_3sp <- fish %>%
filter(common_name %in% c("garibaldi", "blacksmith", "black surfperch"))
```
Notice that the two methods above return the same thing.
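If you want to convince yourself, one quick (hypothetical) check is to store the two versions under different names and compare them:
```
fish_3sp_or <- fish %>%
  filter(common_name == "garibaldi" |
         common_name == "blacksmith" |
         common_name == "black surfperch")

fish_3sp_in <- fish %>%
  filter(common_name %in% c("garibaldi", "blacksmith", "black surfperch"))

identical(fish_3sp_or, fish_3sp_in)   # should return TRUE
```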
**Critical thinking:** In what scenario might you *NOT* want to use `%in%` for an “or” filter statement? Hint: What if the “or” conditions aren’t different outcomes for the same variable?
#### 8\.3\.3\.1 Activity
**Task:** Create a subset from **fish** called **fish\_gar\_2016** that keeps all observations if the year is 2016 *OR* the common name is “garibaldi.”
**Solution:**
```
fish_gar_2016 <- fish %>%
filter(year == 2016 | common_name == "garibaldi")
```
### 8\.3\.4 Filter to return observations that match **this** AND **that**
In the examples above, we learned to keep observations that matched any of a number of conditions (**or** statements).
Sometimes we’ll only want to keep observations that satisfy multiple conditions (e.g., to keep an observation, it must satisfy this condition **AND** that condition). For example, we may want to create a subset that only returns rows from **fish** where the **year** is 2018 *and* the **site** is Arroyo Quemado (“aque”).
In `filter()`, add a comma (or ampersand ‘\&’) between arguments for multiple “and” conditions:
```
aque_2018 <- fish %>%
filter(year == 2018, site == "aque")
```
Check it out to see that only observations where the site is “aque” in 2018 are retained:
```
aque_2018
```
```
## # A tibble: 5 x 4
## year site common_name total_count
## <dbl> <chr> <chr> <dbl>
## 1 2018 aque black surfperch 2
## 2 2018 aque blacksmith 1
## 3 2018 aque garibaldi 1
## 4 2018 aque rock wrasse 4
## 5 2018 aque senorita 36
```
Like most things in R, there are other ways to do the same thing. For example, you could do the same thing using `&` (instead of a comma) between “and” conditions:
```
# Use the ampersand (&) to add another condition "and this must be true":
aque_2018 <- fish %>%
filter(year == 2018 & site == "aque")
```
Or you could just do two filter steps in sequence:
```
# Written as sequential filter steps:
aque_2018 <- fish %>%
filter(year == 2018) %>%
filter(site == "aque")
```
### 8\.3\.5 Activity: combined filter conditions
**Challenge task:** Create a subset from the **fish** data frame, called **low\_gb\_wr** that only contains:
* Observations for *garibaldi* or *rock wrasse*
* AND the *total\_count* is *less than or equal to 10*
**Solution:**
```
low_gb_wr <- fish %>%
filter(common_name %in% c("garibaldi", "rock wrasse"),
total_count <= 10)
```
### 8\.3\.6 `stringr::str_detect()` to filter by a partial pattern
Sometimes we’ll want to keep observations that contain a specific string pattern within a variable of interest.
For example, consider the fantasy data below:
| id | species |
| --- | --- |
| 1 | rainbow rockfish |
| 2 | blue rockfish |
| 3 | sparkle urchin |
| 4 | royal blue fish |
There might be a time when we would want to use observations that:
* Contain the string “fish,” in isolation or within a larger string (like “rockfish”)
* Contain the string “blue”
In those cases, it would be useful to **detect** a string pattern, and potentially keep any rows that contain it. Here, we’ll use `stringr::str_detect()` to find and keep observations that contain our specified string pattern.
Let’s detect and keep observations from **fish** where the **common\_name** variable contains string pattern “black.” Note that there are two fish, blacksmith and black surfperch, that would satisfy this condition.
Using `filter()` \+ `str_detect()` in combination to find and keep observations where the **common\_name** variable contains the pattern “black”:
```
fish_bl <- fish %>%
filter(str_detect(common_name, pattern = "black"))
```
So `str_detect()` returns a series of TRUE/FALSE responses, one for each row, based on whether or not it contains the specified pattern. In that example, any row that *does* contain “black” returns `TRUE`, and any row that *does not* contain “black” returns `FALSE`.
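To see those TRUE/FALSE results directly, you can call `str_detect()` on a small character vector outside of `filter()` \- a minimal illustration:
```
str_detect(c("blacksmith", "garibaldi", "black surfperch"), pattern = "black")
## [1]  TRUE FALSE  TRUE
```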
### 8\.3\.7 Activity
**Task:** Create a new object called **fish\_it**, starting from **fish**, that only contains observations if the **common\_name** variable contains the string pattern “it.” What species remain?
**Solution:**
```
fish_it <- fish %>%
filter(str_detect(common_name, pattern = "it"))
# blacksmITh and senorITa remain!
```
We can also *exclude* observations that contain a set string pattern by adding the `negate = TRUE` argument within `str_detect()`.
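For example, a sketch that drops the two “black” species found above (this assumes a stringr version that includes the `negate` argument, 1\.4\.0 or later):
```
fish_not_black <- fish %>%
  filter(str_detect(common_name, pattern = "black", negate = TRUE))
```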
**Sync your local project to your repo on GitHub.**
8\.4 `dplyr::*_join()` to merge data frames
-------------------------------------------
There are a number of ways to merge data frames in R. We’ll use `full_join()`, `left_join()`, and `inner_join()` in this session.
From R Documentation (`?join`):
* `full_join()`: “returns all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing.” Basically, nothing gets thrown out, even if a match doesn’t exist \- making `full_join()` the safest option for merging data frames. When in doubt, `full_join()`.
* `left_join()`: “return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.”
* `inner_join()`: “returns all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned.” This will drop observations that don’t have a match between the merged data frames, which makes it a riskier merging option if you’re not sure what you’re trying to do.
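As a quick illustration of those differences before we work with the real data, here is a toy pair of data frames (invented for this sketch, not the workshop data):
```
df_x <- tibble(site = c("abur", "mohk"), kelp = c(10, 20))
df_y <- tibble(site = c("abur", "aque"), fish = c(5, 7))

full_join(df_x, df_y, by = "site")   # 3 rows: abur, mohk (fish is NA), aque (kelp is NA)
left_join(df_x, df_y, by = "site")   # 2 rows: abur, mohk (fish is NA)
inner_join(df_x, df_y, by = "site")  # 1 row: abur only
```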
Schematic (from RStudio data wrangling cheat sheet):
We will use **kelp\_abur** as our “left” data frame, and **fish** as our “right” data frame, to explore different join outcomes.
### 8\.4\.1 `full_join()` to merge data frames, keeping everything
When we join data frames in R, we need to tell R a couple of things (and it does the hard joining work for us):
* Which data frames we want to merge together
* Which variables to merge by
Use `full_join()` to safely combine two data frames, keeping everything from both and populating with `NA` as necessary.
Example: use `full_join()` to combine **kelp\_abur** and **fish**:
```
abur_kelp_fish <- kelp_abur %>%
full_join(fish, by = c("year", "site"))
```
Let’s look at the merged data frame with `View(abur_kelp_fish)`. A few things to notice about how `full_join()` has worked:
1. All columns that existed in **both data frames** still exist
2. All observations are retained, even if they don’t have a match. In this case, notice that for other sites (not ‘abur’) the observation for fish still exists, even though there was no corresponding kelp data to merge with it.
3. The kelp frond data is joined to *all observations* where the joining variables (*year*, *site*) are a match, which is why it is repeated 5 times for each year (once for each fish species).
Because all data (observations \& columns) are retained, `full_join()` is the safest option if you’re unclear about how to merge data frames.
### 8\.4\.2 `left_join(x,y)` to merge data frames, keeping everything in the ‘x’ data frame and only matches from the ‘y’ data frame
Now, we want to keep all observations in *kelp\_abur*, and merge them with *fish* while only keeping observations from *fish* that match an observation in *kelp\_abur*. When we use `left_join()`, any information from *fish* that doesn’t have a match (by year and site) in *kelp\_abur* won’t be retained, because those rows wouldn’t have a match in the left data frame.
```
kelp_fish_left <- kelp_abur %>%
left_join(fish, by = c("year","site"))
```
Notice when you look at **kelp\_fish\_left**, data for other sites that exist in fish do **not** get joined, because `left_join(df_a, df_b)` will only keep observations from `df_b` if they have a match in `df_a`!
### 8\.4\.3 `inner_join()` to merge data frames, only keeping observations with a match in **both**
Use `inner_join()` if you **only** want to retain observations that have matches across **both** data frames. Caution: this is built to exclude any observations that don’t match across data frames by the joining variables \- double check to make sure this is actually what you want to do!
For example, if we use `inner_join()` to merge fish and kelp\_abur, then we are asking R to **only return observations where the joining variables (*year* and *site*) have matches in both data frames.** Let’s see what the outcome is:
```
kelp_fish_injoin <- kelp_abur %>%
inner_join(fish, by = c("year", "site"))
# kelp_fish_injoin
```
Here, we see that only observations (rows) where there is a match for *year* and *site* in both data frames are returned.
### 8\.4\.4 `filter()` and `join()` in a sequence
Now let’s combine what we’ve learned about piping, filtering and joining!
Let’s complete the following as part of a single sequence (remember, check to see what you’ve produced after each step) to create a new data frame called **my\_fish\_join**:
* Start with **fish** data frame
* Filter **fish** to include only observations from 2017 at Arroyo Burro (site “abur”)
* Join the **kelp\_abur** data frame to the resulting subset using `left_join()`
* Add a new column that contains the ‘fish per kelp fronds’ density (total\_count / total\_fronds)
That sequence might look like this:
```
my_fish_join <- fish %>%
filter(year == 2017, site == "abur") %>%
left_join(kelp_abur, by = c("year", "site")) %>%
mutate(fish_per_frond = total_count / total_fronds)
```
Explore the resulting **my\_fish\_join** data frame.
8\.5 An HTML table with `kable()` and `kableExtra`
--------------------------------------------------
With any data frame, you can make a nicer\-looking table in your knitted HTML using `knitr::kable()` and functions in the `kableExtra` package.
Start by using `kable()` with my\_fish\_join, and see what the default HTML table looks like in your knitted document:
```
kable(my_fish_join)
```
Simple, but quick to get a clear \& useful table! Now let’s spruce it up a bit with `kableExtra::kable_styling()` to modify HTML table styles:
```
my_fish_join %>%
kable() %>%
kable_styling(bootstrap_options = "striped",
full_width = FALSE)
```
…with many other options for customizing HTML tables! Make sure to check out [“Create awesome HTML tables with knitr::kable() and kableExtra” by Hao Zhu](https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html) for more examples and options.
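For instance, `bootstrap_options` accepts a vector, so you can layer styles \- a small variation on the example above (“hover” is among the documented options):
```
my_fish_join %>%
  kable() %>%
  kable_styling(bootstrap_options = c("striped", "hover"),
                full_width = FALSE)
```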
**Sync your project with your repo on GitHub**
### 8\.5\.1 End `filter()` \+ `_join()` section!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/filter-join.html |
Chapter 8 Filters and joins
===========================
8\.1 Summary
------------
In previous sessions, we’ve learned to do some basic wrangling and find summary information with functions in the `dplyr` package, which exists within the `tidyverse`. In this session, we’ll expand our data wrangling toolkit using:
* `filter()` to conditionally subset our data by **rows**, and
* `*_join()` functions to merge data frames together
* And we’ll make a nicely formatted HTML table with `kable()` and `kableExtra`
The combination of `filter()` and `*_join()` \- to return rows satisfying a condition we specify, and merging data frames by like variables \- is analogous to the useful VLOOKUP function in Excel.
### 8\.1\.1 Objectives
* Use `filter()` to subset data frames, returning **rows** that satisfy variable conditions
* Use `full_join()`, `left_join()`, and `inner_join()` to merge data frames, with different endpoints in mind
* Use `filter()` and `*_join()` as part of a wrangling sequence
### 8\.1\.2 Resources
* [`filter()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/filter.html)
* [`join()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/join.html)
* [Chapters 5 and 13 in *R for Data Science* by Garrett Grolemund and Hadley Wickham](https://r4ds.had.co.nz/)
* [“Create awesome HTML tables with knitr::kable() and kableExtra” by Hao Zhu](https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html)
8\.2 Set\-up: Create a new .Rmd, attach packages \& get data
------------------------------------------------------------
Create a new R Markdown document in your r\-workshop project and knit to save as **filter\_join.Rmd**. Remove all the example code (everything below the set\-up code chunk).
In this session, we’ll attach four packages:
* `tidyverse`
* `readxl`
* `here`
* `kableExtra`
Attach the packages in the setup code chunk in your .Rmd:
```
library(tidyverse)
library(readxl)
library(here)
library(kableExtra)
```
Then create a new code chunk to read in two files from your ‘data’ subfolder:
* fish.csv
* kelp\_fronds.xlsx (read in only the “abur” worksheet by adding argument `sheet = "abur"` to `read_excel()`)
```
# Read in data:
fish <- read_csv(here("data", "fish.csv"))
kelp_abur <- read_excel(here("data", "kelp_fronds.xlsx"), sheet = "abur")
```
We should always explore the data we’ve read in. Use functions like `View()`, `names()`, `summary()`, `head()` and `tail()` to check them out.
Now, let’s use `filter()` to decide which observations (rows) we’ll keep or exclude in new subsets, similar to using Excel’s VLOOKUP function or filter tool.
8\.3 `dplyr::filter()` to conditionally subset by rows
------------------------------------------------------
Use `filter()` to let R know which **rows** you want to keep or exclude, based whether or not their contents match conditions that you set for one or more variables.
Some examples in words that might inspire you to use `filter()`:
* “I only want to keep rows where the temperature is greater than 90°F.”
* “I want to keep all observations **except** those where the tree type is listed as **unknown**.”
* “I want to make a new subset with only data for mountain lions (the species variable) in California (the state variable).”
When we use `filter()`, we need to let R know a couple of things:
* What data frame we’re filtering from
* What condition(s) we want observations to **match** and/or **not match** in order to keep them in the new subset
Here, we’ll learn some common ways to use `filter()`.
### 8\.3\.1 Filter rows by matching a single character string
Let’s say we want to keep all observations from the **fish** data frame where the common name is “garibaldi” (fun fact: that’s California’s official marine state fish, protected in California coastal waters!).
Here, we need to tell R to only *keep rows* from the **fish** data frame when the common name (**common\_name** variable) exactly matches **garibaldi**.
Use `==` to ask R to look for exact matches:
```
fish_garibaldi <- fish %>%
filter(common_name == "garibaldi")
```
Check out the **fish\_garibaldi** object to ensure that only *garibaldi* observations remain.
#### 8\.3\.1\.1 Activity
**Task**: Create a subset starting from the **fish** data frame, stored as object **fish\_mohk**, that only contains observations from Mohawk Reef (site entered as “mohk”).
**Solution**:
```
fish_mohk <- fish %>%
filter(site == "mohk")
```
Explore the subset you just created to ensure that only Mohawk Reef observations are returned.
### 8\.3\.2 Filter rows based on numeric conditions
Use expected operators (\>, \<, \>\=, \<\=, \=\=) to set conditions for a numeric variable when filtering. For this example, we only want to retain observations when the **total\_count** column value is \>\= 50:
```
fish_over50 <- fish %>%
filter(total_count >= 50)
```
### 8\.3\.3 Filter to return rows that match *this* OR *that* OR *that*
What if we want to return a subset of the **fish** df that contains *garibaldi*, *blacksmith* OR *black surfperch*?
There are several ways to write an “OR” statement for filtering, which will keep any observations that match Condition A *or* Condition B *or* Condition C. In this example, we will create a subset from **fish** that only contains rows where the **common\_name** is *garibaldi* or *blacksmith* or *black surfperch*.
Way 1: You can indicate **OR** using the vertical line operator `|` to indicate “OR”:
```
fish_3sp <- fish %>%
filter(common_name == "garibaldi" |
common_name == "blacksmith" |
common_name == "black surfperch")
```
Alternatively, if you’re looking for multiple matches in the *same variable*, you can use the `%in%` operator instead. Use `%in%` to ask R to look for *any matches* within a vector:
```
fish_3sp <- fish %>%
filter(common_name %in% c("garibaldi", "blacksmith", "black surfperch"))
```
Notice that the two methods above return the same thing.
**Critical thinking:** In what scenario might you *NOT* want to use `%in%` for an “or” filter statement? Hint: What if the “or” conditions aren’t different outcomes for the same variable?
#### 8\.3\.3\.1 Activity
**Task:** Create a subset from **fish** called **fish\_gar\_2016** that keeps all observations if the year is 2016 *OR* the common name is “garibaldi.”
**Solution:**
```
fish_gar_2016 <- fish %>%
filter(year == 2016 | common_name == "garibaldi")
```
### 8\.3\.4 Filter to return observations that match **this** AND **that**
In the examples above, we learned to keep observations that matched any of a number of conditions (**or** statements).
Sometimes we’ll only want to keep observations that satisfy multiple conditions (e.g., to keep this observation it must satisfy this condition **AND** that condition). For example, we may want to create a subset that only returns rows from **fish** where the **year** is 2018 *and* the **site** is Arroyo Quemado “aque”
In `filter()`, add a comma (or ampersand ‘\&’) between arguments for multiple “and” conditions:
```
aque_2018 <- fish %>%
filter(year == 2018, site == "aque")
```
Check it out to see that only observations where the site is “aque” in 2018 are retained:
```
aque_2018
```
```
## # A tibble: 5 x 4
## year site common_name total_count
## <dbl> <chr> <chr> <dbl>
## 1 2018 aque black surfperch 2
## 2 2018 aque blacksmith 1
## 3 2018 aque garibaldi 1
## 4 2018 aque rock wrasse 4
## 5 2018 aque senorita 36
```
Like most things in R, there are other ways to do the same thing. For example, you could do the same thing using `&` (instead of a comma) between “and” conditions:
```
# Use the ampersand (&) to add another condition "and this must be true":
aque_2018 <- fish %>%
filter(year == 2018 & site == "aque")
```
Or you could just do two filter steps in sequence:
```
# Written as sequential filter steps:
aque_2018 <- fish %>%
filter(year == 2018) %>%
filter(site == "aque")
```
### 8\.3\.5 Activity: combined filter conditions
**Challenge task:** Create a subset from the **fish** data frame, called **low\_gb\_wr** that only contains:
* Observations for *garibaldi* or *rock wrasse*
* AND the *total\_count* is *less than or equal to 10*
**Solution:**
```
low_gb_wr <- fish %>%
filter(common_name %in% c("garibaldi", "rock wrasse"),
total_count <= 10)
```
### 8\.3\.6 `stringr::str_detect()` to filter by a partial pattern
Sometimes we’ll want to keep observations that contain a specific string pattern within a variable of interest.
For example, consider the fantasy data below:
| id | species |
| --- | --- |
| 1 | rainbow rockfish |
| 2 | blue rockfish |
| 3 | sparkle urchin |
| 4 | royal blue fish |
There might be a time when we would want to use observations that:
* Contain the string “fish,” in isolation or within a larger string (like “rockfish”)
* Contain the string “blue”
In those cases, it would be useful to **detect** a string pattern, and potentially keep any rows that contain it. Here, we’ll use `stringr::str_detect()` to find and keep observations that contain our specified string pattern.
Let’s detect and keep observations from **fish** where the **common\_name** variable contains string pattern “black.” Note that there are two fish, blacksmith and black surfperch, that would satisfy this condition.
Using `filter()` \+ `str_detect()` in combination to find and keep observations where the **site** variable contains pattern “sc”:
```
fish_bl <- fish %>%
filter(str_detect(common_name, pattern = "black"))
```
So `str_detect()` returns is a series of TRUE/FALSE responses for each row, based on whether or not they contain the specified pattern. In that example, any row that *does* contain “black” returns `TRUE`, and any row that *does not* contain “black” returns `FALSE`.
### 8\.3\.7 Activity
**Task:** Create a new object called **fish\_it**, starting from **fish**, that only contains observations if the **common\_name** variable contains the string pattern “it.” What species remain?
**Solution:**
```
fish_it <- fish %>%
filter(str_detect(common_name, pattern = "it"))
# blacksmITh and senorITa remain!
```
We can also *exclude* observations that contain a set string pattern by adding the `negate = TRUE` argument within `str_detect()`.
**Sync your local project to your repo on GitHub.**
8\.4 `dplyr::*_join()` to merge data frames
-------------------------------------------
There are a number of ways to merge data frames in R. We’ll use `full_join()`, `left_join()`, and `inner_join()` in this session.
8\.1 Summary
------------
In previous sessions, we’ve learned to do some basic wrangling and find summary information with functions in the `dplyr` package, which exists within the `tidyverse`. In this session, we’ll expand our data wrangling toolkit using:
* `filter()` to conditionally subset our data by **rows**, and
* `*_join()` functions to merge data frames together
* And we’ll make a nicely formatted HTML table with `kable()` and `kableExtra`
The combination of `filter()` and `*_join()` \- to return rows satisfying a condition we specify, and to merge data frames by like variables \- is analogous to the useful VLOOKUP function in Excel.
### 8\.1\.1 Objectives
* Use `filter()` to subset data frames, returning **rows** that satisfy variable conditions
* Use `full_join()`, `left_join()`, and `inner_join()` to merge data frames, with different endpoints in mind
* Use `filter()` and `*_join()` as part of a wrangling sequence
### 8\.1\.2 Resources
* [`filter()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/filter.html)
* [`join()` documentation from tidyverse.org](https://dplyr.tidyverse.org/reference/join.html)
* [Chapters 5 and 13 in *R for Data Science* by Garrett Grolemund and Hadley Wickham](https://r4ds.had.co.nz/)
* [“Create awesome HTML tables with knitr::kable() and kableExtra” by Hao Zhu](https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html)
8\.2 Set\-up: Create a new .Rmd, attach packages \& get data
------------------------------------------------------------
Create a new R Markdown document in your r\-workshop project and knit to save as **filter\_join.Rmd**. Remove all the example code (everything below the set\-up code chunk).
In this session, we’ll attach four packages:
* `tidyverse`
* `readxl`
* `here`
* `kableExtra`
Attach the packages in the setup code chunk in your .Rmd:
```
library(tidyverse)
library(readxl)
library(here)
library(kableExtra)
```
Then create a new code chunk to read in two files from your ‘data’ subfolder:
* fish.csv
* kelp\_fronds.xlsx (read in only the “abur” worksheet by adding argument `sheet = "abur"` to `read_excel()`)
```
# Read in data:
fish <- read_csv(here("data", "fish.csv"))
kelp_abur <- read_excel(here("data", "kelp_fronds.xlsx"), sheet = "abur")
```
We should always explore the data we’ve read in. Use functions like `View()`, `names()`, `summary()`, `head()` and `tail()` to check them out.
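For example, a quick check might look like this (a sketch; run whichever of these is useful, and note that `View()` opens an interactive viewer rather than printing to the Console):

```
names(fish)         # column names
summary(kelp_abur)  # quick summaries of each column
head(fish)          # first six rows
tail(kelp_abur)     # last six rows
# View(fish)        # opens the data frame in a spreadsheet-style viewer
```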
Now, let’s use `filter()` to decide which observations (rows) we’ll keep or exclude in new subsets, similar to using Excel’s VLOOKUP function or filter tool.
8\.3 `dplyr::filter()` to conditionally subset by rows
------------------------------------------------------
Use `filter()` to let R know which **rows** you want to keep or exclude, based on whether or not their contents match conditions that you set for one or more variables.
Some examples in words that might inspire you to use `filter()`:
* “I only want to keep rows where the temperature is greater than 90°F.”
* “I want to keep all observations **except** those where the tree type is listed as **unknown**.”
* “I want to make a new subset with only data for mountain lions (the species variable) in California (the state variable).”
When we use `filter()`, we need to let R know a couple of things:
* What data frame we’re filtering from
* What condition(s) we want observations to **match** and/or **not match** in order to keep them in the new subset
Here, we’ll learn some common ways to use `filter()`.
### 8\.3\.1 Filter rows by matching a single character string
Let’s say we want to keep all observations from the **fish** data frame where the common name is “garibaldi” (fun fact: that’s California’s official marine state fish, protected in California coastal waters!).
Here, we need to tell R to only *keep rows* from the **fish** data frame when the common name (**common\_name** variable) exactly matches **garibaldi**.
Use `==` to ask R to look for exact matches:
```
fish_garibaldi <- fish %>%
filter(common_name == "garibaldi")
```
Check out the **fish\_garibaldi** object to ensure that only *garibaldi* observations remain.
#### 8\.3\.1\.1 Activity
**Task**: Create a subset starting from the **fish** data frame, stored as object **fish\_mohk**, that only contains observations from Mohawk Reef (site entered as “mohk”).
**Solution**:
```
fish_mohk <- fish %>%
filter(site == "mohk")
```
Explore the subset you just created to ensure that only Mohawk Reef observations are returned.
### 8\.3\.2 Filter rows based on numeric conditions
Use the expected relational operators (`>`, `<`, `>=`, `<=`, `==`) to set conditions for a numeric variable when filtering. For this example, we only want to retain observations where the **total\_count** column value is `>=` 50:
```
fish_over50 <- fish %>%
filter(total_count >= 50)
```
### 8\.3\.3 Filter to return rows that match *this* OR *that* OR *that*
What if we want to return a subset of the **fish** df that contains *garibaldi*, *blacksmith* OR *black surfperch*?
There are several ways to write an “OR” statement for filtering, which will keep any observations that match Condition A *or* Condition B *or* Condition C. In this example, we will create a subset from **fish** that only contains rows where the **common\_name** is *garibaldi* or *blacksmith* or *black surfperch*.
Way 1: Use the vertical line operator `|` to indicate “OR”:
```
fish_3sp <- fish %>%
filter(common_name == "garibaldi" |
common_name == "blacksmith" |
common_name == "black surfperch")
```
Alternatively, if you’re looking for multiple matches in the *same variable*, you can use the `%in%` operator instead. Use `%in%` to ask R to look for *any matches* within a vector:
```
fish_3sp <- fish %>%
filter(common_name %in% c("garibaldi", "blacksmith", "black surfperch"))
```
Notice that the two methods above return the same thing.
**Critical thinking:** In what scenario might you *NOT* want to use `%in%` for an “or” filter statement? Hint: What if the “or” conditions aren’t different outcomes for the same variable?
#### 8\.3\.3\.1 Activity
**Task:** Create a subset from **fish** called **fish\_gar\_2016** that keeps all observations if the year is 2016 *OR* the common name is “garibaldi.”
**Solution:**
```
fish_gar_2016 <- fish %>%
filter(year == 2016 | common_name == "garibaldi")
```
### 8\.3\.4 Filter to return observations that match **this** AND **that**
In the examples above, we learned to keep observations that matched any of a number of conditions (**or** statements).
Sometimes we’ll only want to keep observations that satisfy multiple conditions (e.g., to keep this observation it must satisfy this condition **AND** that condition). For example, we may want to create a subset that only returns rows from **fish** where the **year** is 2018 *and* the **site** is Arroyo Quemado (“aque”).
In `filter()`, add a comma (or ampersand ‘\&’) between arguments for multiple “and” conditions:
```
aque_2018 <- fish %>%
filter(year == 2018, site == "aque")
```
Check it out to see that only observations where the site is “aque” in 2018 are retained:
```
aque_2018
```
```
## # A tibble: 5 x 4
## year site common_name total_count
## <dbl> <chr> <chr> <dbl>
## 1 2018 aque black surfperch 2
## 2 2018 aque blacksmith 1
## 3 2018 aque garibaldi 1
## 4 2018 aque rock wrasse 4
## 5 2018 aque senorita 36
```
Like most things in R, there are other ways to do the same thing. For example, you could do the same thing using `&` (instead of a comma) between “and” conditions:
```
# Use the ampersand (&) to add another condition "and this must be true":
aque_2018 <- fish %>%
filter(year == 2018 & site == "aque")
```
Or you could just do two filter steps in sequence:
```
# Written as sequential filter steps:
aque_2018 <- fish %>%
filter(year == 2018) %>%
filter(site == "aque")
```
### 8\.3\.5 Activity: combined filter conditions
**Challenge task:** Create a subset from the **fish** data frame, called **low\_gb\_wr** that only contains:
* Observations for *garibaldi* or *rock wrasse*
* AND the *total\_count* is *less than or equal to 10*
**Solution:**
```
low_gb_wr <- fish %>%
filter(common_name %in% c("garibaldi", "rock wrasse"),
total_count <= 10)
```
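The same subset can also be written with explicit `|` and `&` operators; note the parentheses around the “or” part so it is evaluated before the “and” condition (a sketch equivalent to the solution above):

```
low_gb_wr <- fish %>%
  filter((common_name == "garibaldi" | common_name == "rock wrasse") &
           total_count <= 10)
```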
### 8\.3\.6 `stringr::str_detect()` to filter by a partial pattern
Sometimes we’ll want to keep observations that contain a specific string pattern within a variable of interest.
For example, consider the fantasy data below:
| id | species |
| --- | --- |
| 1 | rainbow rockfish |
| 2 | blue rockfish |
| 3 | sparkle urchin |
| 4 | royal blue fish |
There might be a time when we would want to use observations that:
* Contain the string “fish,” in isolation or within a larger string (like “rockfish”)
* Contain the string “blue”
In those cases, it would be useful to **detect** a string pattern, and potentially keep any rows that contain it. Here, we’ll use `stringr::str_detect()` to find and keep observations that contain our specified string pattern.
Let’s detect and keep observations from **fish** where the **common\_name** variable contains string pattern “black.” Note that there are two fish, blacksmith and black surfperch, that would satisfy this condition.
Use `filter()` \+ `str_detect()` in combination to find and keep observations where the **common\_name** variable contains the pattern “black”:
```
fish_bl <- fish %>%
filter(str_detect(common_name, pattern = "black"))
```
So what `str_detect()` returns is a series of TRUE/FALSE responses for each row, based on whether or not it contains the specified pattern. In that example, any row that *does* contain “black” returns `TRUE`, and any row that *does not* contain “black” returns `FALSE`.
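To see that TRUE/FALSE behavior directly, here is a minimal sketch using a small made\-up character vector (not part of the **fish** data):

```
str_detect(c("blacksmith", "black surfperch", "senorita"), pattern = "black")
```

```
## [1]  TRUE  TRUE FALSE
```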
### 8\.3\.7 Activity
**Task:** Create a new object called **fish\_it**, starting from **fish**, that only contains observations if the **common\_name** variable contains the string pattern “it.” What species remain?
**Solution:**
```
fish_it <- fish %>%
filter(str_detect(common_name, pattern = "it"))
# blacksmITh and senorITa remain!
```
We can also *exclude* observations that contain a set string pattern by adding the `negate = TRUE` argument within `str_detect()`.
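For example, a sketch (the object name **fish\_not\_black** is just an example) that keeps only observations whose common name does *not* contain “black”:

```
fish_not_black <- fish %>%
  filter(str_detect(common_name, pattern = "black", negate = TRUE))
```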
**Sync your local project to your repo on GitHub.**
8\.4 `dplyr::*_join()` to merge data frames
-------------------------------------------
There are a number of ways to merge data frames in R. We’ll use `full_join()`, `left_join()`, and `inner_join()` in this session.
From R Documentation (`?join`):
* `full_join()`: “returns all rows and all columns from both x and y. Where there are not matching values, returns NA for the one missing.” Basically, nothing gets thrown out, even if a match doesn’t exist \- making `full_join()` the safest option for merging data frames. When in doubt, `full_join()`.
* `left_join()`: “return all rows from x, and all columns from x and y. Rows in x with no match in y will have NA values in the new columns. If there are multiple matches between x and y, all combinations of the matches are returned.”
* `inner_join()`: “returns all rows from x where there are matching values in y, and all columns from x and y. If there are multiple matches between x and y, all combination of the matches are returned.” This will drop observations that don’t have a match between the merged data frames, which makes it a riskier merging option if you’re not sure what you’re trying to do.
Schematic: see the RStudio data wrangling cheat sheet for a diagram of these joins.
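If it helps to see the differences on something tiny first, here is a sketch using two small made\-up data frames (these are not part of the workshop data):

```
df_x <- tibble(site = c("a", "b", "c"), kelp = c(10, 20, 30))
df_y <- tibble(site = c("b", "c", "d"), fish = c(5, 8, 2))

full_join(df_x, df_y, by = "site")  # 4 rows: a, b, c, d (NAs fill the gaps)
left_join(df_x, df_y, by = "site")  # 3 rows: a, b, c (fish is NA for "a")
inner_join(df_x, df_y, by = "site") # 2 rows: b, c (matches only)
```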
We will use **kelp\_abur** as our “left” data frame, and **fish** as our “right” data frame, to explore different join outcomes.
### 8\.4\.1 `full_join()` to merge data frames, keeping everything
When we join data frames in R, we need to tell R a couple of things (and it does the hard joining work for us):
* Which data frames we want to merge together
* Which variables to merge by
Use `full_join()` to safely combine two data frames, keeping everything from both and populating with `NA` as necessary.
Example: use `full_join()` to combine **kelp\_abur** and **fish**:
```
abur_kelp_fish <- kelp_abur %>%
full_join(fish, by = c("year", "site"))
```
Let’s look at the merged data frame with `View(abur_kelp_fish)`. A few things to notice about how `full_join()` has worked:
1. All columns that existed in **both data frames** still exist
2. All observations are retained, even if they don’t have a match. In this case, notice that for other sites (not ‘abur’) the observation for fish still exists, even though there was no corresponding kelp data to merge with it.
3. The kelp frond data is joined to *all observations* where the joining variables (*year*, *site*) are a match, which is why it is repeated 5 times for each year (once for each fish species).
Because all data (observations \& columns) are retained, `full_join()` is the safest option if you’re unclear about how to merge data frames.
### 8\.4\.2 `left_join(x,y)` to merge data frames, keeping everything in the ‘x’ data frame and only matches from the ‘y’ data frame
Now, we want to keep all observations in *kelp\_abur*, and merge them with *fish* while only keeping observations from *fish* that match an observation within *kelp\_abur*. When we use `left_join()`, any information from *fish* that doesn’t have a match (by year and site) in *kelp\_abur* won’t be retained, because those rows wouldn’t have a match in the left data frame.
```
kelp_fish_left <- kelp_abur %>%
left_join(fish, by = c("year","site"))
```
Notice when you look at **kelp\_fish\_left**, data for other sites that exist in fish do **not** get joined, because `left_join(df_a, df_b)` will only keep observations from `df_b` if they have a match in `df_a`!
### 8\.4\.3 `inner_join()` to merge data frames, only keeping observations with a match in **both**
Use `inner_join()` if you **only** want to retain observations that have matches across **both** data frames. Caution: this is built to exclude any observations that don’t match across data frames by joined variables \- double check to make sure this is actually what you want to do!
For example, if we use `inner_join()` to merge fish and kelp\_abur, then we are asking R to **only return observations where the joining variables (*year* and *site*) have matches in both data frames.** Let’s see what the outcome is:
```
kelp_fish_injoin <- kelp_abur %>%
inner_join(fish, by = c("year", "site"))
# kelp_fish_injoin
```
Here, we see that only observations (rows) where there is a match for *year* and *site* in both data frames are returned.
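One quick way to compare the three outcomes (assuming you created all three objects above) is to check how many rows each one contains:

```
nrow(abur_kelp_fish)   # every row from both data frames is represented
nrow(kelp_fish_left)   # only rows with a year/site present in kelp_abur
nrow(kelp_fish_injoin) # only rows with a year/site match in both
```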
### 8\.4\.4 `filter()` and `join()` in a sequence
Now let’s combine what we’ve learned about piping, filtering and joining!
Let’s complete the following as part of a single sequence (remember, check to see what you’ve produced after each step) to create a new data frame called **my\_fish\_join**:
* Start with **fish** data frame
* Filter **fish** to only include observations for 2017 at Arroyo Burro
* Join the **kelp\_abur** data frame to the resulting subset using `left_join()`
* Add a new column that contains the ‘fish per kelp fronds’ density (total\_count / total\_fronds)
That sequence might look like this:
```
my_fish_join <- fish %>%
filter(year == 2017, site == "abur") %>%
left_join(kelp_abur, by = c("year", "site")) %>%
mutate(fish_per_frond = total_count / total_fronds)
```
Explore the resulting **my\_fish\_join** data frame.
8\.5 An HTML table with `kable()` and `kableExtra`
--------------------------------------------------
With any data frame, you can make a nicer looking table in your knitted HTML using `knitr::kable()` and functions in the `kableExtra` package.
Start by using `kable()` with my\_fish\_join, and see what the default HTML table looks like in your knitted document:
```
kable(my_fish_join)
```
Simple, but quick to get a clear \& useful table! Now let’s spruce it up a bit with `kableExtra::kable_styling()` to modify HTML table styles:
```
my_fish_join %>%
kable() %>%
kable_styling(bootstrap_options = "striped",
full_width = FALSE)
```
There are many other options for customizing HTML tables! Make sure to check out [“Create awesome HTML tables with knitr::kable() and kableExtra” by Hao Zhu](https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_html.html) for more examples and options.
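For example, here is a sketch that adds a caption and a few more styling options (the caption text and the specific `bootstrap_options` chosen are just examples):

```
my_fish_join %>%
  kable(caption = "Fish counts and kelp fronds at Arroyo Burro, 2017") %>%
  kable_styling(bootstrap_options = c("striped", "hover", "condensed"),
                full_width = FALSE,
                position = "left")
```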
**Sync your project with your repo on GitHub**
### 8\.5\.1 End `filter()` \+ `_join()` section!
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/collaborating.html |
Chapter 9 Collaborating \& getting help
=======================================
9\.1 Summary
------------
Since the GitHub session (Chapter [4](github.html#github)), we have been practicing using GitHub with RStudio to collaborate with our most important collaborator: Future You.
Here we will practice using GitHub with RStudio to collaborate with others now, with a mindset towards Future Us (your colleagues that you know and have yet to meet). We will also cover how to engage with the \#rstats community, including how to engage on Twitter, and how to ask for help.
We are going to teach you the simplest way to collaborate with someone, which is for both of you to have privileges to directly edit and add files to a repository. GitHub is built for software developer teams, and there are a lot of features that limit who can directly edit files (which lead to “pull requests”), but we won’t cover that today.
### 9\.1\.1 Objectives
* Intro to R communities
* How to effectively ask for help
+ Googling. Error messages are your friends
+ How to use Twitter for \#rstats
+ Create a reproducible example with `reprex`
* Create a new repo and give permission to a collaborator
* Publish webpages online
### 9\.1\.2 Resources
* [ESM 206 Intro to data science \& stats](https://allisonhorst.github.io), specifically [ESM Lecture 2](https://docs.google.com/presentation/d/1u1DdhU_WTv1b-sbQgqVGAE-bA2Nq_Yym8BzcPW4lS3k/edit#slide=id.g63942ead2d_0_219) \- by Allison Horst
* [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) \- by Thomas Mock
* [reprex.tidyverse.org](https://reprex.tidyverse.org/)
* [Reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) \- by Jenny Bryan
* [Getting help in R: do as I say, not as I’ve done](https://sctyner.github.io/rhelp.html) by Sam Tyner
* [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/) \- by Julie Lowndes
9\.2 R communities
------------------
We are going to start off by talking about communities that exist around R and how you can engage with them.
R communities connect online and in person. And we use Twitter as a platform to connect with each other. Yes, Twitter is a legit tool for data science. Most communities have some degree of in\-person and online presence, with Twitter being a big part of that online presence, and it enables you to talk directly with people. On Twitter, we connect using the \#rstats hashtag, and thus it is often called the “rstats community” (more on Twitter in a moment).
This is a small (and incomplete!) sampling to give you a sense of a few communities. Please see Thomas Mock’s presentation [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) for more details.
#### 9\.2\.0\.1 RStudio Community
What is it: Online community forum for all questions R \& RStudio
Location: online at [community.rstudio.com](https://community.rstudio.com)
Also: [RStudio](https://twitter.com/rstudio) on Twitter
#### 9\.2\.0\.2 RLadies
RLadies is a world\-wide organization to promote gender diversity in the R community.
Location: online at [rladies.org](https://rladies.org/), on Twitter at [rladiesglobal](https://twitter.com/rladiesglobal)
Also: [WeAreRLadies](https://twitter.com/WeAreRLadies)
#### 9\.2\.0\.3 rOpenSci
What is it: rOpenSci builds software with a community of users and developers, and educates scientists about transparent research practices.
Location: online at [ropensci.org](https://ropensci.org/), on Twitter at [ropensci](https://twitter.com/ropensci)
Also: [roknowtifier](https://twitter.com/roknowtifier), [rocitations](https://twitter.com/rocitations)
#### 9\.2\.0\.4 R User Groups
What is it: R User Groups (“RUGs”) are in\-person meetups supported by [The R Consortium](https://www.r-consortium.org/projects/r-user-group-support-program).
Location: local chapters. See a [list of RUGs and conferences](https://jumpingrivers.github.io/meetingsR/r-user-groups.html).
Also: example: [Los Angeles R Users Group](https://twitter.com/la_rusers)
#### 9\.2\.0\.5 The Carpentries
What is it: Network teaching foundational data science skills to researchers worldwide
Location: online at [carpentries.org](https://carpentries.org), on Twitter at [thecarpentries](https://twitter.com/thecarpentries), local workshops worldwide
#### 9\.2\.0\.6 R4DS Community
What is it: A community of R learners at all skill levels working together to improve our skills.
Location: on Twitter: [R4DScommunity](https://twitter.com/R4DScommunity), on Slack — sign up from [rfordatasci.com](https://www.rfordatasci.com/)
Also: [\#tidytuesday](https://twitter.com/search?q=%23tidytuesday&src=typed_query), [R4DS\_es](https://twitter.com/RFDS_es)
### 9\.2\.1 Community awesomeness
Example with Sam Firke’s janitor package: [sfirke.github.io/janitor](http://sfirke.github.io/janitor/), highlighting the [`excel_numeric_to_date`](http://sfirke.github.io/janitor/reference/excel_numeric_to_date.html) function and learning about it through Twitter.
9\.3 How to use Twitter for \#rstats
------------------------------------
Twitter is how we connect with other R users, learn from each other, develop together, and become friends. Especially at an event like RStudio::conf, it is a great way to connect and stay connected with folks you meet.
Twitter is definitely a firehose of information, but if you use it deliberately, you can hear the signal through the noise.
I was super skeptical of Twitter. I thought it was a megaphone for angry people. But it turns out it is a place to have small, thoughtful conversations and be part of innovative and friendly communities.
### 9\.3\.1 Examples
Here are a few examples of how to use Twitter for \#rstats.
When I saw [this tweet](https://twitter.com/Md_Harris/status/1074469302974193665/photo/1) by [Md\_Harris](https://twitter.com/Md_Harris), this was my internal monologue:
1. Cool visualization!
2. I want to represent my data this way
3. He includes his [code](https://gist.github.com/mrecos) that I can look at to understand what he did, and I can run and remix
4. The package is from [sckottie](https://twitter.com/sckottie) — who I know from [rOpenSci](https://ropensci.org), which is a really amazing software developer community for science
5. [`rnoaa`](https://cran.r-project.org/web/packages/rnoaa/index.html) is a package making NOAA \[US environmental] data more accessible! I didn’t know about this, it will be so useful for my colleagues
6. I will retweet so my network can benefit as well
Another example, [this tweet](https://twitter.com/JennyBryan/status/1074339217986138113) where [JennyBryan](https://twitter.com/JennyBryan/) is asking for feedback on a super useful package for interfacing between R and excel: [`readxl`](https://readxl.tidyverse.org/).
My internal monologue:
1. Yay, `readxl` is awesome, and also getting better thanks to Jenny
2. Do I have any spreadsheets to contribute?
3. In any case, I will retweet so others can contribute. And I’ll like it too because I appreciate this work
### 9\.3\.2 How to Twitter
My advice for Twitter is to start off small and deliberately. Curate who you follow and start by listening. I use Twitter deliberately for R and science communities, so that is the majority of the folks I follow (but of course I also follow [Mark Hamill](https://twitter.com/HamillHimself)).
So start using Twitter to listen and learn, and then as you gradually build up courage, you can like and retweet things. And remember that liking and retweeting is not only a way to engage with the community yourself, but it is also a way to welcome and amplify other people. Sometimes I just reply saying how cool something is. Sometimes I like it. Sometimes I retweet. Sometimes I retweet with a quote/comment. But I also miss a lot of things since I limit how much time I give to Twitter, and that’s OK. You will always miss things but you are part of the community and they are there for you like you are for them.
If you’re joining twitter to learn R, I suggest following:
* [hadleywickham](https://twitter.com/hadleywickham)
* [JennyBryan](https://twitter.com/JennyBryan)
* [rOpenSci](https://twitter.com/ropensci)
* [WeAreRLadies](https://twitter.com/WeAreRLadies)
Listen to what they say and who joins those conversations, and follow other people and organizations. You could also look at who they are following. Also, check out the [\#rstats](https://twitter.com/search?q=%23rstats&src=typed_query) hashtag. This is not something that you can follow (although you can have it as a column in software like TweetDeck), but you can search it and you’ll see that the people you follow use it to help tag conversations. You’ll find other useful tags as well, within your domain, as well as other R\-related interests, e.g. [\#rspatial](https://twitter.com/search?q=%23rspatial&src=typed_query). When I read marine science papers, I see if the authors are on Twitter; I sometimes follow them, ask them questions, or just tell them I liked their work!
You can also follow us:
* [juliesquid](https://twitter.com/juliesquid)
* [allison\_horst](https://twitter.com/allison_horst)
* [jamiecmonty](https://twitter.com/jamiecmonty)
* [ECOuture9](https://twitter.com/ECOuture9)
These are just a few ways to learn and build community on Twitter. And as you feel comfortable, you can start sharing your ideas or your links too. Live\-tweeting is a really great way to engage as well, and bridge in\-person conferences with online communities. And of course, in addition to engaging on Twitter, check whether there are local RLadies chapters or other R meetups, and join! Or perhaps [start one](https://openscapes.org/blog/2018/11/16/how-to-start-a-coding-club/)?
So Twitter is a place to engage with folks and learn, and while it is also a place to ask questions, there are other places to look first, depending on your question.
9\.4 Getting help
-----------------
Getting help, or really helping you help yourself, means moving beyond “it’s not working” and towards solution\-oriented approaches. Part of this is the mindset where you **expect that someone has encountered this problem before** and that **most likely the problem is your typo or misuse**, and not that R is broken or hates you.
We’re going to talk about how to ask for help, how to interpret responses, and how to act upon the help you receive.
### 9\.4\.1 Read the error message
As we’ve talked about before, they may be red, they may be unfamiliar, but **error messages are your friends**. There are multiple types of messages that R will print. Read the message to figure out what it’s trying to tell you.
**Error:** There’s a fatal error in your code that prevented it from being run through successfully. You need to fix it for the code to run.
**Warning:** Non\-fatal errors (don’t stop the code from running, but this is a potential problem that you should know about).
**Message:** Here’s some helpful information about the code you just ran (you can hide these if you want to).
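If you want to see the difference for yourself, here is a minimal sketch; run each line on its own in the Console and compare what R prints:

```
log("a")                          # Error: non-numeric argument to mathematical function
log(-1)                           # Warning: NaNs produced (a result is still returned)
message("Just letting you know!") # Message: informational only
```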
### 9\.4\.2 Googling
The internet has the answer to all of your R questions, hopes, and dreams.
When you get an error you don’t understand, copy it and paste it into Google. You can also add “rstats” or “tidyverse” or something to help Google (although it’s getting really good without it too).
For error messages, copy\-pasting the exact message is best. But if you have a “how do I…?” type question you can also enter this into Google. You’ll develop the vocabulary you need to refine your search terms as you become more familiar with R. It’s a continued learning process.
And just as important as Googling your error message is being able to identify a useful result.
Something I can’t emphasize enough: **pay attention to filepaths**. They tell you the source, they help you find pages again. Often remembering a few things about it will let you either google it again or navigate back there yourself.
**Check the date, check the source, check the relevance.** Is this a modern solution, or one from 2013? Do I trust the person responding? Is this about my question or on a different topic?
You will see links from many places, particularly:
* RStudio Community
* Stack Overflow
* Books, blogs, tutorials, courses, webinars
* GitHub Issues
### 9\.4\.3 Create a reprex
A “reprex” is a REPRoducible EXample: code that you need help with and want to ask someone about.
Jenny Bryan made the `reprex` package because “conversations about code are more productive with code that ***actually runs***, that ***I don’t have to run***, and that ***I can easily run***.”
Let me demo an example, and then you will do it yourself. This is Jenny’s summary from her [reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) of what I’ll do:
`reprex` is part of the Tidyverse, so we all already have it installed, but we do need to attach it:
```
library(reprex)
```
First let me create a little example that I have a question about. I want to know how I change the color of the geom\_points in my ggplot. (Reminder: this example is to illustrate reprex, not how you would actually look in the help pages!!!)
I’ll type into our RMarkdown script:
```
library(tidyverse)
ggplot(cars, aes(speed, dist)) +
geom_point()
```
So this is the code I have a question about it. My next step is to select it all and copy it into my clipboard.
Then I go to my Console and type:
```
reprex()
```
Reprex does its thing, making sure this is a reproducible example — this wouldn’t be without `library(tidyverse)`! — and displaying it in my Viewer on the bottom\-right of the RStudio IDE.
reprex includes the output — experienced programmers who you might be asking for help can often read your code and know where the problem lies, especially when they can see the output.
When it finishes I also have what I see in the Viewer copied in my clipboard. I can paste it anywhere! In an email, Google Doc, in Slack. I’m going to paste mine in an Issue for my r\-workshop repository.
When I paste it:
Notice that following the backticks, there is only `r`, not `{r}`. This is because what we have pasted is formatted so GitHub can display it as R code, but it won’t be executed by R through RMarkdown.
I can click on the Preview button in the Issues to see how this will render: and it will show my code, nicely formatted for R.
So in this example I might write at the top of the comment: “Practicing a reprex and Issues. @allison\_horst how do I change the point color to cyan?”
**`reprex` is a “workflow package”**. That means that it’s something we don’t put in Rmds, scripts, or anything else. We use it in the Console when we are preparing to ask for help \- from ourselves or someone else.
### 9\.4\.4 Activity
Make a reprex using the built\-in `mtcars` dataset and paste it in the Issues for your repository. (Have a look: `head(mtcars); skimr::skim(mtcars)`)
1. install and attach the `reprex` package
2. For your reprex: take the `mtcars` dataset and then filter it for observations where `mpg` is more than 26\.
3. Navigate to github.com/your\_username/r\-workshop/issues
Hint: remember to read the error message. “could not find function `%>%`” means you’ve forgotten to attach the appropriate package with `library()`
#### 9\.4\.4\.1 Solution (no peeking)
```
## setup: run in Rmd or Console
library(reprex)
## reprex code: run in Rmd or Console
library(tidyverse) # or library(dplyr) or library(magrittr)
mtcars %>% filter(mpg > 26)
## copy the above
## reprex call: run in Console
reprex()
## paste in Issue!
```
9\.5 Collaborating with GitHub
------------------------------
Now we’re going to collaborate with a partner and set up for our last session, which will tie together everything we’ve been learning.
### 9\.5\.1 Create repo (Partner 1\)
Team up with a partner sitting next to you. Partner 1 will create a new repository. We will do this in the same way that we did in Chapter [4](github.html#github): [Create a repository on Github.com](github.html#create-a-repository-on-github.com).
Let’s name it `r-collab`.
### 9\.5\.2 Create a gh\-pages branch (Partner 1\)
We aren’t going to talk about branches very much, but they are a powerful feature of git/GitHub. I think of it as creating a copy of your work that becomes a parallel universe that you can modify safely because it’s not affecting your original work. And then you can choose to merge the universes back together if and when you want. By default, when you create a new repo you begin with one branch, and it is named `master`. When you create new branches, you can name them whatever you want. However, if you name one `gh-pages` (all lowercase, with a `-` and no spaces), this will let you create a website. And that’s our plan. So, Partner 1, do this to create a `gh-pages` branch:
On the homepage for your repo on GitHub.com, click the button that says “Branch:master.” Here, you can switch to another branch (right now there aren’t any others besides `master`), or create one by typing a new name.
Let’s type `gh-pages`.
Let’s also change `gh-pages` to the default branch and delete the master branch: this will be a one\-time\-only thing that we do here:
First click to control branches:
And then click to change the default branch to `gh-pages`. I like to then delete the `master` branch when it has the little red trash can next to it. It will make you confirm that you really want to delete it, which I do!
### 9\.5\.3 Give your collaborator privileges (Partner 1 and 2\)
Now, Partner 1, go into Settings \> Collaborators \> enter Partner 2’s (your collaborator’s) username.
Partner 2 then needs to check their email and accept as a collaborator. Notice that your collaborator has “Push access to the repository” (highlighted below):
### 9\.5\.4 Clone to a new R Project (Partner 1\)
Now let’s have Partner 1 clone the repository to their local computer. We’ll do this through RStudio like we did before (see Chapter [4](github.html#github): [Clone your repository using RStudio](github.html#clone-your-repository-using-rstudio)), but with a final additional step before hitting “Create Project”: select “Open in a new Session.”
Opening this Project in a new Session opens up a new world of awesomeness from RStudio. Having different RStudio project sessions allows you to keep your work separate and organized. So you can collaborate with this collaborator on this repository while also working on your other repository from this morning. I tend to have a lot of projects going at one time:
Have a look in your git tab.
Like we saw this morning, when you first clone a repo through RStudio, RStudio will add an `.Rproj` file to your repo. And if you didn’t add a `.gitignore` file when you originally created the repo on GitHub.com, RStudio will also add this for you. So, Partner 1, let’s go ahead and sync this back to GitHub.com.
Remember:
Let’s confirm that this was synced by looking at GitHub.com again. You may have to refresh the page, but you should see this commit where you added the `.Rproj` file.
### 9\.5\.5 Clone to a new R Project (Partner 2\)
Now it’s Partner 2’s turn! Partner 2, clone this repository following the same steps that Partner 1 just did. When you clone it, RStudio should not create any new files — why? Partner 1 already created and pushed the `.Rproj` and `.gitignore` files so they already exist in the repo.
### 9\.5\.6 Create data folder (Partner 2\)
Partner 2, let’s create a folder for our data and copy our `noaa_landings.csv` there.
And now let’s sync back to GitHub: Pull, Stage, Commit, Push
When we inspect on GitHub.com, click to view all the commits, you’ll see commits logged from both Partner 1 and 2!
> Question: Would you still be able to clone a repository that you are not a collaborator on? What do you think would happen? Try it! Can you sync back?
### 9\.5\.7 State of the Repository
OK, so where do things stand right now? GitHub.com has the most recent versions of all the repository’s files. Partner 2 also has these most recent versions locally. How about Partner 1?
Partner 1 does not have the most recent versions of everything on their computer.
Question: How can we change that? Or how could we even check?
Answer: PULL.
Let’s have Partner 1 go back to RStudio and Pull. If their files aren’t up\-to\-date, this will pull the most recent versions to their local computer. And if they already did have the most recent versions? Well, pulling doesn’t cost anything (other than an internet connection), so if everything is up\-to\-date, pulling is fine too.
I recommend pulling every time you come back to a collaborative repository. Whether you haven’t opened RStudio in a month or you’ve just been away for a lunch break, pull. It might not be necessary, but it can save a lot of heartache later.
9\.6 Merge conflicts
--------------------
What kind of heartache are we talking about? Merge conflicts.
Within a file, GitHub tracks changes line\-by\-line. So you can also have collaborators working on different lines within the same file and GitHub will be able to weave those changes into each other – that’s its job!
It’s when you have collaborators working on *the same lines within the same file* that you can have **merge conflicts**. This is when there is a conflict within the same line so that GitHub can’t merge automatically. They need a human to help decide what information to keep (which is good because you don’t want GitHub to decide for you). Merge conflicts can be frustrating, but like R’s error messages, they are actually trying to help you.
So let’s experience this together: we will create and solve a merge conflict. **Stop and watch me demo how to create and solve a merge conflict with my Partner 2, and then you will do the same with your partner.** Here’s what I am going to do:
### 9\.6\.1 Pull (Partners 1 and 2\)
Both partners go to RStudio and pull so you have the most recent versions of all your files.
### 9\.6\.2 Create a conflict (Partners 1 and 2\)
Now, Partners 1 and 2, both go to the README.md, and on Line 4, write something, anything. Save the README.
I’m not going to give any examples because when you do this I want to be sure that both Partners write something different. Save the README.
### 9\.6\.3 Sync (Partner 2\)
OK. Now, let’s have Partner 2 sync: pull, stage, commit, push. Just like normal.
Great.
### 9\.6\.4 Sync attempts \& fixes (Partner 1\)
Now, let’s have Partner 1 (me) try.
When I try to Pull, I get the first error we will see today: “Your local changes to README.md would be overwritten by merge.” GitHub is telling me that it knows I’ve modified my README, but since I haven’t staged and committed them, it can’t do its job and merge my conflicts with whatever is different about the version from GitHub.com.
This is good: the alternative would be GitHub deciding which one to keep and it’s better that we have that kind of control and decision making.
GitHub provides some guidance: either commit this work first, or “stash it,” which you can interpret that as moving the README temporarily to another folder somewhere outside of this GitHub repository so that you can successfully pull and then decide your next steps.
Let’s follow their advice and have Partner 1 commit. Great. Now let’s try pulling again.
New error: “Merge conflict in README…fix conflicts and then commit the result.”
So this error is different from the previous: GitHub knows what has changed line\-by\-line in my file here, and it knows what has changed line\-by\-line in the version on GitHub.com. And it knows there is a conflict between them. So it’s asking me to now compare these changes, choose a preference, and commit.
**Note:** if Partner 2 and I were not intentionally in this demo editing exactly the same lines, GitHub likely could have done its job and merged this file successfully after our first error fix above.
We will again follow GitHub’s advice to fix the conflicts. Let’s close this window and inspect.
Did you notice two other things that happened along with this message?
First, in the Git tab, next to the README listing there are orange `U`s; this means that there is an unresolved conflict. It also means my file is no longer staged with a check, because modifications have occurred to the file since it was staged.
Second, the README file itself changed; there is new text and symbols. (We got a preview in the diff pane also).
```
<<<<<<< HEAD
Julie is collaborating on this README.
=======
**Allison is adding text here.**
>>>>>>> 05a189b23372f0bdb5b42630f8cb318003cee19b
```
In this example, Partner 1 is Julie and Partner 2 is Allison. GitHub is displaying the line that Julie wrote and the line that Allison wrote, separated by `=======`. These are the two choices that I (Partner 1\) have to decide between: which one do I want to keep? And where does this decision start and end? The lines are bounded by `<<<<<<< HEAD` and `>>>>>>> long commit identifier`.
So, to resolve this merge conflict, Partner 1 has to choose which one to keep. And I tell GitHub my choice by deleting everything in this bundle of text except the line I want. So, Partner 1 will delete the `<<<<<<< HEAD`, `=======`, and `>>>>>>> long commit identifier` markers, plus whichever of Julie’s or Allison’s lines I don’t want to keep.
I’ll do this, and then commit again. In this example, we’ve kept Allison’s line:
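After deleting the conflict markers, that part of the README contains only the line we kept:

```
**Allison is adding text here.**
```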
Then I’ll stage, and write a commit message. I often write “resolving merge conflict” or something similar. When I stage the file, notice how now my edits look like a simple line replacement (compare with the image above before it was re\-staged):
And we’re done! We can inspect on GitHub.com that I am the most recent contributor to this repository. And if we look in the commit history we will see both Allison and my original commits, along with our merge conflict fix.
### 9\.6\.5 Activity
Create a merge conflict with your partner, following the steps that we just did in the demo above. Practice different approaches to solving errors: for example, try stashing instead of committing.
### 9\.6\.6 How do you avoid merge conflicts?
Merge conflicts can occur when you collaborate with others — I find most often it is collaborating with ME from a different computer. They will happen, but you can minimize them by getting into good habits.
To minimize merge conflicts, pull often so that you are aware of anything that is different, and deal with it early. Similarly, commit and push often so that your contributions do not become too unweildly for yourself or others later on.
Also, talk with your collaborators. Are they working on the exact same file right now that you need to be? If so, coordinate with them (in person, GChat, Slack, email). For example: “I’m working on X part and will push my changes before my meeting — then you can work on it and I’ll pull when I’m back.” Also, if you find yourself always working on the exact same file, you could consider breaking it into different files to minimize problems.
But merge conflicts will occur and some of them will be heartbreaking and demoralizing. They happen to me when I collaborate with myself between my work computer and laptop. We demoed small conflicts with just one file, but they can occur across many files, particularly when your code is generating figures, scripts, or HTML files. Sometimes the best approach is the [burn it all down method](https://happygitwithr.com/burn.html), where you delete your local copy of the repo and re\-clone.
Protect yourself by pulling and syncing often!
9\.7 Create your collaborative website
--------------------------------------
OK. Let’s have both Partners create a new RMarkdown file and name it `my_name_fisheries.Rmd`. Here’s what you will do:
1. Pull
2. Create a new RMarkdown file **and name it `my_name_fisheries.Rmd`**. Let’s do it all lowercase. These will become pages for our website
3. We’ll start by testing: let’s simply change the title inside the Rmd, call it “My Name’s Fisheries Analysis”
4. Knit
5. Save and sync your .Rmd and your .html files
* (pull, stage, commit, push)
6. Go to Partner 1’s repo, mine is [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
7. GitHub also supports this as a website (because we set up our gh\-pages branch)
Where is it? Figure out your website’s url from your github repo’s url — pay attention to urls. \- note that the url starts with my **username.github.io**
* my github repo: [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
* my website url: [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/)
* right now this displays the README as the “home page” for our website.
8. Now navigate to your web page! For example:
* my github repo: [https://github.com/jules32/r\-collab/julie\_fisheries](https://github.com/jules32/r-collab/julie_fisheries)
* my website url: [https://jules32\.github.io/r\-collab/julie\_fisheries](https://jules32.github.io/r-collab/julie_fisheries)
> ***ProTip*** Pay attention to URLs. An unsung skill of the modern analyst is to be able to navigate the internet by keeping an eye on patterns.
So cool!
You and your partner have created individual webpages here, but they do not talk to each other (i.e. you can’t navigate between them or even know that one exists from the other). We will not organize these pages into a website today, but you can practice this on your own with this hour\-long tutorial: [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/).
> **Aside:** On websites, if something is called `index.html`, that defaults to the home page. So [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/) is the same as [https://jules32\.github.io/r\-collab/index.html](https://jules32.github.io/r-collab/index.html). So as you think about building websites you can develop your index.Rmd file rather than your README.md as your homepage.
#### 9\.7\.0\.1 Troubleshooting
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 9\.7\.1 END **collaborating** session!
| always\_allow\_html: true |
| --- |
9\.1 Summary
------------
Since the GitHub session (Chapter [4](github.html#github)), we have been practicing using GitHub with RStudio to collaborate with our most important collaborator: Future You.
Here we will practice using GitHub with RStudio to collaborate with others now, with a mindset towards Future Us (the colleagues you know and have yet to meet). We will also learn how to engage with the \#rstats community, including how to engage on Twitter and how to ask for help.
We are going to teach you the simplest way to collaborate with someone, which is for both of you to have privileges to directly edit and add files to a repository. GitHub is built for software developer teams, and it has a lot of features that limit who can directly edit files (which leads to “pull requests”), but we won’t cover those today.
### 9\.1\.1 Objectives
* intro to R communities
* How to effectively ask for help
+ Googling. Error messages are your friends
+ How to use Twitter for \#rstats
+ Create a reproducible example with `reprex`
* create a new repo and give permission to a collaborator
* publish webpages online
### 9\.1\.2 Resources
* [ESM 206 Intro to data science \& stats](https://allisonhorst.github.io), specifically [ESM Lecture 2](https://docs.google.com/presentation/d/1u1DdhU_WTv1b-sbQgqVGAE-bA2Nq_Yym8BzcPW4lS3k/edit#slide=id.g63942ead2d_0_219) \- by Allison Horst
* [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) \- by Thomas Mock
* [reprex.tidyverse.org](https://reprex.tidyverse.org/)
* [Reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) \- by Jenny Bryan
* [Getting help in R: do as I say, not as I’ve done](https://sctyner.github.io/rhelp.html) by Sam Tyner
* [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/) \- by Julie Lowndes
9\.2 R communities
------------------
We are going to start off by talking about communities that exist around R and how you can engage with them.
R communities connect online and in person, and we use Twitter as a platform to connect with each other. Yes, Twitter is a legit tool for data science. Most communities have some degree of in\-person and online presence, with Twitter being a big part of that online presence, and it enables you to talk directly with people. On Twitter, we connect using the \#rstats hashtag, and are thus often called the “rstats community” (more on Twitter in a moment).
This is a small (and incomplete!) sampling to give you a sense of a few communities. Please see Thomas Mock’s presentation [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) for more details.
#### 9\.2\.0\.1 RStudio Community
What is it: Online community forum for all questions R \& RStudio
Location: online at [community.rstudio.com](https://community.rstudio.com/)
Also: [RStudio](https://twitter.com/rstudio) on Twitter
#### 9\.2\.0\.2 RLadies
RLadies is a world\-wide organization to promote gender diversity in the R community.
Location: online at [rladies.org](https://rladies.org/), on Twitter at [rladiesglobal](https://twitter.com/rladiesglobal)
Also: [WeAreRLadies](https://twitter.com/WeAreRLadies)
#### 9\.2\.0\.3 rOpenSci
What is it: rOpenSci builds software with a community of users and developers, and educates scientists about transparent research practices.
Location: online at [ropensci.org](https://ropensci.org/), on Twitter at [ropensci](https://twitter.com/ropensci)
Also: [roknowtifier](https://twitter.com/roknowtifier), [rocitations](https://twitter.com/rocitations)
#### 9\.2\.0\.4 R User Groups
What is it: R User Groups (“RUGs”) are in\-person meetups supported by [The R Consortium](https://www.r-consortium.org/projects/r-user-group-support-program).
Location: local chapters. See a [list of RUGs and conferences](https://jumpingrivers.github.io/meetingsR/r-user-groups.html).
Also: example: [Los Angeles R Users Group](https://twitter.com/la_rusers)
#### 9\.2\.0\.5 The Carpentries
What is it: Network teaching foundational data science skills to researchers worldwide
Location: online at [carpentries.org](https://carpentries.org), on Twitter at [thecarpentries](https://twitter.com/thecarpentries), local workshops worldwide
#### 9\.2\.0\.6 R4DS Community
What is it: A community of R learners at all skill levels working together to improve our skills.
Location: on Twitter: [R4DScommunity](https://twitter.com/R4DScommunity), on Slack — sign up from [rfordatasci.com](https://www.rfordatasci.com/)
Also: [\#tidytuesday](https://twitter.com/search?q=%23tidytuesday&src=typed_query), [R4DS\_es](https://twitter.com/RFDS_es)
### 9\.2\.1 Community awesomeness
Example with Sam Firke’s janitor package: [sfirke.github.io/janitor](http://sfirke.github.io/janitor/), highlighting the [`excel_numeric_to_date`](http://sfirke.github.io/janitor/reference/excel_numeric_to_date.html) function and learning about it through Twitter.
9\.3 How to use Twitter for \#rstats
------------------------------------
Twitter is how we connect with other R users, learn from each other, develop together, and become friends. Especially at an event like RStudio::conf, it is a great way to connect and stay connected with folks you meet.
Twitter is definitely a firehose of information, but if you use it deliberately, you can hear the signal through the noise.
I was super skeptical of Twitter. I thought it was a megaphone for angry people. But it turns out it is a place to have small, thoughtful conversations and be part of innovative and friendly communities.
### 9\.3\.1 Examples
Here are a few examples of how to use Twitter for \#rstats.
When I saw [this tweet](https://twitter.com/Md_Harris/status/1074469302974193665/photo/1) by [Md\_Harris](https://twitter.com/Md_Harris), this was my internal monologue:
1. Cool visualization!
2. I want to represent my data this way
3. He includes his [code](https://gist.github.com/mrecos) that I can look at to understand what he did, and I can run and remix
4. The package is from [sckottie](https://twitter.com/sckottie) — who I know from [rOpenSci](https://ropensci.org), which is a really amazing software developer community for science
5. [`rnoaa`](https://cran.r-project.org/web/packages/rnoaa/index.html) is a package making NOAA \[US environmental] data more accessible! I didn’t know about this, it will be so useful for my colleagues
6. I will retweet so my network can benefit as well
Another example, [this tweet](https://twitter.com/JennyBryan/status/1074339217986138113) where [JennyBryan](https://twitter.com/JennyBryan/) is asking for feedback on a super useful package for interfacing between R and excel: [`readxl`](https://readxl.tidyverse.org/).
My internal monologue:
1. Yay, `readxl` is awesome, and also getting better thanks to Jenny
2. Do I have any spreadsheets to contribute?
3. In any case, I will retweet so others can contribute. And I’ll like it too because I appreciate this work
### 9\.3\.2 How to Twitter
My advice for Twitter is to start off small and deliberately. Curate who you follow and start by listening. I use Twitter deliberately for R and science communities, so that is the majority of the folks I follow (but of course I also follow [Mark Hamill](https://twitter.com/HamillHimself)).
So start using Twitter to listen and learn, and then as you gradually build up courage, you can like and retweet things. And remember that liking and retweeting is not only a way to engage with the community yourself, but it is also a way to welcome and amplify other people. Sometimes I just reply saying how cool something is. Sometimes I like it. Sometimes I retweet. Sometimes I retweet with a quote/comment. But I also miss a lot of things since I limit how much time I give to Twitter, and that’s OK. You will always miss things but you are part of the community and they are there for you like you are for them.
If you’re joining twitter to learn R, I suggest following:
* [hadleywickham](https://twitter.com/hadleywickham)
* [JennyBryan](https://twitter.com/JennyBryan)
* [rOpenSci](https://twitter.com/ropensci)
* [WeAreRLadies](https://twitter.com/WeAreRLadies)
Listen to what they say and who joins those conversations, and follow other people and organizations. You could also look at who they are following. Also, check out the [\#rstats](https://twitter.com/search?q=%23rstats&src=typed_query) hashtag. This is not something that you can follow (although you can have it as a column in software like TweetDeck), but you can search it and you’ll see that the people you follow use it to help tag conversations. You’ll find other useful tags as well, within your domain, as well as other R\-related interests, e.g. [\#rspatial](https://twitter.com/search?q=%23rspatial&src=typed_query). When I read marine science papers, I see if the authors are on Twitter; I sometimes follow them, ask them questions, or just tell them I liked their work!
You can also follow us:
* [juliesquid](https://twitter.com/juliesquid)
* [allison\_horst](https://twitter.com/allison_horst)
* [jamiecmonty](https://twitter.com/jamiecmonty)
* [ECOuture9](https://twitter.com/ECOuture9)
These are just a few ways to learn and build community on Twitter. And as you feel comfortable, you can start sharing your ideas or your links too. Live\-tweeting is a really great way to engage as well, and bridge in\-person conferences with online communities. And of course, in addition to engaging on Twitter, check whether there are local RLadies chapters or other R meetups, and join! Or perhaps [start one](https://openscapes.org/blog/2018/11/16/how-to-start-a-coding-club/)?
So Twitter is a place to engage with folks and learn, and while it is also a place to ask questions, there are other places to look first, depending on your question.
9\.4 Getting help
-----------------
Getting help, or really helping you help yourself, means moving beyond “it’s not working” and towards solution\-oriented approaches. Part of this is the mindset where you **expect that someone has encountered this problem before** and that **most likely the problem is your typo or misuse**, and not that R is broken or hates you.
We’re going to talk about how to ask for help, how to interpret responses, and how to act upon the help you receive.
### 9\.4\.1 Read the error message
As we’ve talked about before, they may be red, they may be unfamiliar, but **error messages are your friends**. There are multiple types of messages that R will print. Read the message to figure out what it’s trying to tell you.
**Error:** There’s a fatal error in your code that prevented it from being run through successfully. You need to fix it for the code to run.
**Warning:** Non\-fatal errors (don’t stop the code from running, but this is a potential problem that you should know about).
**Message:** Here’s some helpful information about the code you just ran (you can hide these if you want to).
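To see the difference concretely, here is a tiny sketch you can run in the Console; each call triggers one of the three message types (the exact wording R prints may vary by version):
```
# Error: fatal, the code stops
sqrt("a")     # Error in sqrt("a") : non-numeric argument to mathematical function

# Warning: the code still runs, but R flags a potential problem
log(-1)       # Warning message: NaNs produced

# Message: just helpful information
message("This is only an informational message")
```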
### 9\.4\.2 Googling
The internet has the answer to all of your R questions, hopes, and dreams.
When you get an error you don’t understand, copy it and paste it into Google. You can also add “rstats” or “tidyverse” or something to help Google (although it’s getting really good without it too).
For error messages, copy\-pasting the exact message is best. But if you have a “how do I…?” type question you can also enter this into Google. You’ll develop the vocabulary you need to refine your search terms as you become more familiar with R. It’s a continued learning process.
And just as important as Googling your error message is being able to identify a useful result.
Something I can’t emphasize enough: **pay attention to filepaths**. They tell you the source, they help you find pages again. Often remembering a few things about it will let you either google it again or navigate back there yourself.
**Check the date, check the source, check the relevance.** Is this a modern solution, or one from 2013? Do I trust the person responding? Is this about my question or on a different topic?
You will see links from many places, particularly:
* RStudio Community
* Stack Overflow
* Books, blogs, tutorials, courses, webinars
* GitHub Issues
### 9\.4\.3 Create a reprex
A “reprex” is a REPRoducible EXample: code that you need help with and want to ask someone about.
Jenny Bryan made the `reprex` package because “conversations about code are more productive with code that ***actually runs***, that ***I don’t have to run***, and that ***I can easily run***.”
Let me demo an example, and then you will do it yourself. This is Jenny’s summary from her [reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) of what I’ll do:
`reprex` is part of the Tidyverse, so we all already have it installed, but we do need to attach it:
```
library(reprex)
```
First let me create a little example that I have a question about. I want to know how I change the color of the geom\_points in my ggplot. (Reminder: this example is to illustrate reprex, not how you would actually look this up in the help pages!)
I’ll type into our RMarkdown script:
```
library(tidyverse)
ggplot(cars, aes(speed, dist)) +
geom_point()
```
So this is the code I have a question about. My next step is to select it all and copy it into my clipboard.
Then I go to my Console and type:
```
reprex()
```
Reprex does its thing, making sure this is a reproducible example — this wouldn’t be without `library(tidyverse)`! — and displaying it in my Viewer on the bottom\-right of the RStudio IDE.
reprex includes the output — experienced programmers who you might be asking for help can often read your code and know where the problem lies, especially when they can see the output.
When it finishes I also have what I see in the Viewer copied in my clipboard. I can paste it anywhere! In an email, Google Doc, in Slack. I’m going to paste mine in an Issue for my r\-workshop repository.
When I paste it:
Notice that following the backticks, there is only `r`, not `{r}`. This is because what we have pasted is formatted so GitHub can display it as R code, but it won’t be executed by R through R Markdown.
I can click on the Preview button in the Issues to see how this will render: and it will show my code, nicely formatted for R.
So in this example I might write at the top of the comment: “Practicing a reprex and Issues. @allison\_horst how do I change the point color to cyan?”
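For the record, the answer to that demo question is the `color` argument of `geom_point()`; a minimal sketch:
```
library(tidyverse)

ggplot(cars, aes(speed, dist)) +
  geom_point(color = "cyan")
```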
**`reprex` is a “workflow package”**. That means that it’s something we don’t put in Rmds, scripts, or anything else. We use it in the Console when we are preparing to ask for help — from ourselves or someone else.
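If you prefer not to use the clipboard, `reprex()` can also take the code directly as its first argument. A small sketch using the same example as above:
```
library(reprex)

# Pass the code as an expression instead of copying it to the clipboard
reprex({
  library(tidyverse)
  ggplot(cars, aes(speed, dist)) +
    geom_point()
})
```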
### 9\.4\.4 Activity
Make a reprex using the built\-in `mtcars` dataset and paste it in the Issues for your repository. (Have a look: `head(mtcars); skimr::skim(mtcars)`)
1. install and attach the `reprex` package
2. For your reprex: take the `mtcars` dataset and then filter it for observations where `mpg` is more than 26\.
3. Navigate to github.com/your\_username/r\-workshop/issues
Hint: remember to read the error message. “could not find function `%>%`” means you’ve forgotten to attach the appropriate package with `library()`
#### 9\.4\.4\.1 Solution (no peeking)
```
## setup: run in Rmd or Console
library(reprex)
## reprex code: run in Rmd or Console
library(tidyverse) # or library(dplyr) or library(magrittr)
mtcars %>% filter(mpg > 26)
## copy the above
## reprex call: run in Console
reprex()
## paste in Issue!
```
9\.5 Collaborating with GitHub
------------------------------
Now we’re going to collaborate with a partner and set up for our last session, which will tie together everything we’ve been learning.
### 9\.5\.1 Create repo (Partner 1\)
Team up with a partner sitting next to you. Partner 1 will create a new repository. We will do this in the same way that we did in Chapter [4](github.html#github): [Create a repository on Github.com](github.html#create-a-repository-on-github.com).
Let’s name it `r-collab`.
### 9\.5\.2 Create a gh\-pages branch (Partner 1\)
We aren’t going to talk about branches very much, but they are a powerful feature of git/GitHub. I think of it as creating a copy of your work that becomes a parallel universe that you can modify safely because it’s not affecting your original work. And then you can choose to merge the universes back together if and when you want. By default, when you create a new repo you begin with one branch, and it is named `master`. When you create new branches, you can name them whatever you want. However, if you name one `gh-pages` (all lowercase, with a `-` and no spaces), this will let you create a website. And that’s our plan. So, Partner 1, do this to create a `gh-pages` branch:
On the homepage for your repo on GitHub.com, click the button that says “Branch:master.” Here, you can switch to another branch (right now there aren’t any others besides `master`), or create one by typing a new name.
Let’s type `gh-pages`.
Let’s also change `gh-pages` to the default branch and delete the master branch: this will be a one\-time\-only thing that we do here:
First click to control branches:
And then click to change the default branch to `gh-pages`. I like to then delete the `master` branch when it has the little red trash can next to it. It will make you confirm that you really want to delete it, which I do!
### 9\.5\.3 Give your collaborator privileges (Partner 1 and 2\)
Now, Partner 1, go into Settings \> Collaborators \> enter Partner 2’s (your collaborator’s) username.
Partner 2 then needs to check their email and accept as a collaborator. Notice that your collaborator has “Push access to the repository” (highlighted below):
### 9\.5\.4 Clone to a new R Project (Partner 1\)
Now let’s have Partner 1 clone the repository to their local computer. We’ll do this through RStudio like we did before (see Chapter [4](github.html#github): [Clone your repository using RStudio](github.html#clone-your-repository-using-rstudio)), but with a final additional step before hitting “Create Project”: select “Open in a new Session.”
Opening this Project in a new Session opens up a new world of awesomeness from RStudio. Having different RStudio project sessions allows you to keep your work separate and organized. So you can collaborate with this collaborator on this repository while also working on your other repository from this morning. I tend to have a lot of projects going at one time:
Have a look in your git tab.
Like we saw this morning, when you first clone a repo through RStudio, RStudio will add an `.Rproj` file to your repo. And if you didn’t add a `.gitignore` file when you originally created the repo on GitHub.com, RStudio will also add this for you. So, Partner 1, let’s go ahead and sync this back to GitHub.com.
Remember:
Let’s confirm that this was synced by looking at GitHub.com again. You may have to refresh the page, but you should see this commit where you added the `.Rproj` file.
### 9\.5\.5 Clone to a new R Project (Partner 2\)
Now it’s Partner 2’s turn! Partner 2, clone this repository following the same steps that Partner 1 just did. When you clone it, RStudio should not create any new files — why? Partner 1 already created and pushed the `.Rproj` and `.gitignore` files so they already exist in the repo.
### 9\.5\.6 Create data folder (Partner 2\)
Partner 2, let’s create a folder for our data and copy our `noaa_landings.csv` there.
And now let’s sync back to GitHub: Pull, Stage, Commit, Push
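If you are curious what those RStudio buttons do under the hood, this is roughly the equivalent workflow in the Terminal (assuming the folder is named `data`; the commit message is just an example):
```
git pull
git add data/noaa_landings.csv
git commit -m "add NOAA landings data"
git push
```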
When we inspect on GitHub.com, click to view all the commits, you’ll see commits logged from both Partner 1 and 2!
> Question: Would you still be able to clone a repository that you are not a collaborator on? What do you think would happen? Try it! Can you sync back?
### 9\.5\.7 State of the Repository
OK, so where do things stand right now? GitHub.com has the most recent versions of all the repository’s files. Partner 2 also has these most recent versions locally. How about Partner 1?
Partner 1 does not have the most recent versions of everything on their computer.
Question: How can we change that? Or how could we even check?
Answer: PULL.
Let’s have Partner 1 go back to RStudio and Pull. If their files aren’t up\-to\-date, this will pull the most recent versions to their local computer. And if they already did have the most recent versions? Well, pulling doesn’t cost anything (other than an internet connection), so if everything is up\-to\-date, pulling is fine too.
I recommend pulling every time you come back to a collaborative repository. Whether you haven’t opened RStudio in a month or you’ve just been away for a lunch break, pull. It might not be necessary, but it can save a lot of heartache later.
9\.6 Merge conflicts
--------------------
What kind of heartache are we talking about? Merge conflicts.
Within a file, GitHub tracks changes line\-by\-line. So you can have collaborators working on different lines within the same file and GitHub will be able to weave those changes into each other – that’s its job!
It’s when you have collaborators working on *the same lines within the same file* that you can have **merge conflicts**. This is when there is a conflict within the same line so that GitHub can’t merge automatically, and it needs a human to help decide what information to keep (which is good, because you don’t want GitHub to decide for you). Merge conflicts can be frustrating, but like R’s error messages, they are actually trying to help you.
So let’s experience this together: we will create and solve a merge conflict. **Stop and watch me demo how to create and solve a merge conflict with my Partner 2, and then you will do the same with your partner.** Here’s what I am going to do:
### 9\.6\.1 Pull (Partners 1 and 2\)
Both partners go to RStudio and pull so you have the most recent versions of all your files.
### 9\.6\.2 Create a conflict (Partners 1 and 2\)
Now, Partners 1 and 2, both go to the README.md, and on Line 4, write something, anything. Save the README.
I’m not going to give any examples because when you do this I want to be sure that both Partners write something different. Then save the README.
### 9\.6\.3 Sync (Partner 2\)
OK. Now, let’s have Partner 2 sync: pull, stage, commit, push. Just like normal.
Great.
### 9\.6\.4 Sync attempts \& fixes (Partner 1\)
Now, let’s have Partner 1 (me) try.
When I try to Pull, I get the first error we will see today: “Your local changes to README.md would be overwritten by merge.” GitHub is telling me that it knows I’ve modified my README, but since I haven’t staged and committed those changes, it can’t do its job and merge them with whatever is different about the version from GitHub.com.
This is good: the alternative would be GitHub deciding which one to keep and it’s better that we have that kind of control and decision making.
GitHub provides some guidance: either commit this work first, or “stash it,” which you can interpret as moving the README temporarily to another folder somewhere outside of this GitHub repository so that you can successfully pull and then decide your next steps.
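We will commit in a moment, but if you ever want to try the stash route instead (for example in the Activity below), the Terminal equivalent is roughly:
```
git stash        # set your uncommitted changes aside
git pull         # now the pull can succeed
git stash pop    # bring your changes back and resolve anything that conflicts
```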
Let’s follow their advice and have Partner 1 commit. Great. Now let’s try pulling again.
New error: “Merge conflict in README…fix conflicts and then commit the result.”
So this error is different from the previous: GitHub knows what has changed line\-by\-line in my file here, and it knows what has changed line\-by\-line in the version on GitHub.com. And it knows there is a conflict between them. So it’s asking me to now compare these changes, choose a preference, and commit.
**Note:** if Partner 2 and I were not intentionally in this demo editing exactly the same lines, GitHub likely could have done its job and merged this file successfully after our first error fix above.
We will again follow GitHub’s advice to fix the conflicts. Let’s close this window and inspect.
Did you notice two other things that happened along with this message?
First, in the Git tab, next to the README listing there are orange `U`s; this means that there is an unresolved conflict. My file is no longer staged with a check because modifications have occurred to the file since it was staged.
Second, the README file itself changed; there is new text and symbols. (We got a preview in the diff pane also).
```
<<<<<<< HEAD
Julie is collaborating on this README.
=======
**Allison is adding text here.**
>>>>>>> 05a189b23372f0bdb5b42630f8cb318003cee19b
```
In this example, Partner 1 is Julie and Partner 2 is Allison. GitHub is displaying the line that Julie wrote and the line that Allison wrote, separated by `=======`. These are the two choices that I (Partner 1\) have to decide between: which one do you want to keep? And where does this decision start and end? The lines are bounded by `<<<<<<< HEAD` and `>>>>>>> long commit identifier`.
So, to resolve this merge conflict, Partner 1 has to choose which one to keep. And I tell GitHub my choice by deleting everything in this bundle of text except the line I want. So, Partner 1 will delete the `<<<<<<< HEAD`, `=======`, and `>>>>>>> long commit identifier` lines, plus whichever of Julie’s or Allison’s lines I don’t want to keep.
I’ll do this, and then commit again. In this example, we’ve kept Allison’s line:
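After that cleanup, the conflicted section of the README contains only the line we kept:
```
**Allison is adding text here.**
```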
Then I’ll stage, and write a commit message. I often write “resolving merge conflict” or something similar. When I stage the file, notice how now my edits look like a simple line replacement (compare with the image above before it was re\-staged):
And we’re done! We can inspect on GitHub.com that I am the most recent contributor to this repository. And if we look in the commit history we will see both Allison and my original commits, along with our merge conflict fix.
### 9\.6\.5 Activity
Create a merge conflict with your partner, following the steps that we just did in the demo above. Practice different approaches to solving errors: for example, try stashing instead of committing.
### 9\.6\.6 How do you avoid merge conflicts?
Merge conflicts can occur when you collaborate with others — I find most often it is collaborating with ME from a different computer. They will happen, but you can minimize them by getting into good habits.
To minimize merge conflicts, pull often so that you are aware of anything that is different, and deal with it early. Similarly, commit and push often so that your contributions do not become too unwieldy for yourself or others later on.
Also, talk with your collaborators. Are they working on the exact same file right now that you need to be? If so, coordinate with them (in person, GChat, Slack, email). For example: “I’m working on X part and will push my changes before my meeting — then you can work on it and I’ll pull when I’m back.” Also, if you find yourself always working on the exact same file, you could consider breaking it into different files to minimize problems.
But merge conflicts will occur and some of them will be heartbreaking and demoralizing. They happen to me when I collaborate with myself between my work computer and laptop. We demoed small conflicts with just one file, but they can occur across many files, particularly when your code is generating figures, scripts, or HTML files. Sometimes the best approach is the [burn it all down method](https://happygitwithr.com/burn.html), where you delete your local copy of the repo and re\-clone.
Protect yourself by pulling and syncing often!
9\.7 Create your collaborative website
--------------------------------------
OK. Let’s have both Partners create a new RMarkdown file and name it `my_name_fisheries.Rmd`. Here’s what you will do:
1. Pull
2. Create a new RMarkdown file **and name it `my_name_fisheries.Rmd`**. Let’s do it all lowercase. These will become pages for our website.
3. We’ll start by testing: let’s simply change the title inside the Rmd, calling it “My Name’s Fisheries Analysis” (see the example YAML header after this list)
4. Knit
5. Save and sync your .Rmd and your .html files
* (pull, stage, commit, push)
6. Go to Partner 1’s repo, mine is [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
7. GitHub also supports this as a website (because we set up our gh\-pages branch)
Where is it? Figure out your website’s URL from your GitHub repo’s URL — pay attention to URLs, and note that the website URL starts with **username.github.io**
* my github repo: [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
* my website url: [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/)
* right now this displays the README as the “home page” for our website.
8. Now navigate to your web page! For example:
* my github repo: [https://github.com/jules32/r\-collab/julie\_fisheries](https://github.com/jules32/r-collab/julie_fisheries)
* my website url: [https://jules32\.github.io/r\-collab/julie\_fisheries](https://jules32.github.io/r-collab/julie_fisheries)
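For step 3, the title lives in the YAML header at the top of the Rmd. Here is a minimal sketch of what that header might look like (assuming the default `output: html_document` that RStudio creates):
```
---
title: "My Name's Fisheries Analysis"
output: html_document
---
```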
> ***ProTip*** Pay attention to URLs. An unsung skill of the modern analyst is to be able to navigate the internet by keeping an eye on patterns.
So cool!
You and your partner have created individual webpages here, but they do not talk to each other (i.e. you can’t navigate between them or even know that one exists from the other). We will not organize these pages into a website today, but you can practice this on your own with this hour\-long tutorial: [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/).
> **Aside:** On websites, if something is called `index.html`, that defaults to the home page. So [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/) is the same as [https://jules32\.github.io/r\-collab/index.html](https://jules32.github.io/r-collab/index.html). So as you think about building websites you can develop your index.Rmd file rather than your README.md as your homepage.
#### 9\.7\.0\.1 Troubleshooting
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 9\.7\.1 END **collaborating** session!
| always\_allow\_html: true |
| --- |
#### 9\.7\.0\.1 Troubleshooting
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 9\.7\.1 END **collaborating** session!
| always\_allow\_html: true |
| --- |
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/collaborating.html |
Chapter 9 Collaborating \& getting help
=======================================
9\.1 Summary
------------
Since the GitHub session (Chapter [4](github.html#github)), we have been practicing using GitHub with RStudio to collaborate with our most important collaborator: Future You.
Here we will practice using GitHub with RStudio to collaborate with others now, with a mindset towards Future Us (your colleagues that you know and have yet to meet). We will also learn how to engage with the \#rstats community, including how to engage on Twitter, and how to ask for help.
We are going to teach you the simplest way to collaborate with someone, which is for both of you to have privileges to directly edit and add files to a repository. GitHub is built for software developer teams, and there are many features that limit who can directly edit files (which leads to “pull requests”), but we won’t cover that today.
### 9\.1\.1 Objectives
* intro to R communities
* How to effectively ask for help
+ Googling. Error messages are your friends
+ How to use Twitter for \#rstats
+ Create a reproducible example with `reprex`
* create a new repo and give permission to a collaborator
* publish webpages online
### 9\.1\.2 Resources
* [ESM 206 Intro to data science \& stats](https://allisonhorst.github.io), specifically [ESM Lecture 2](https://docs.google.com/presentation/d/1u1DdhU_WTv1b-sbQgqVGAE-bA2Nq_Yym8BzcPW4lS3k/edit#slide=id.g63942ead2d_0_219) \- by Allison Horst
* [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) \- by Thomas Mock
* [reprex.tidyverse.org](https://reprex.tidyverse.org/)
* [Reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) \- by Jenny Bryan
* [Getting help in R: do as I say, not as I’ve done](https://sctyner.github.io/rhelp.html) by Sam Tyner
* [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/) \- by Julie Lowndes
9\.2 R communities
------------------
We are going to start off by talking about communities that exist around R and how you can engage with them.
R communities connect online and in person, and we use Twitter as a platform to connect with each other. Yes, Twitter is a legit tool for data science. Most communities have some degree of in\-person and online presence, with Twitter being a big part of that online presence, and it enables you to talk directly with people. On Twitter, we connect using the \#rstats hashtag, and thus we are often called the “rstats community” (more on Twitter in a moment).
This is a small (and incomplete!) sampling to give you a sense of a few communities. Please see Thomas Mock’s presentation [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) for more details.
#### 9\.2\.0\.1 RStudio Community
What is it: Online community forum for all questions R \& RStudio
Location: online at [community.rstudio.com](https://community.rstudio.com/)
Also: [RStudio](https://twitter.com/rstudio) on Twitter
#### 9\.2\.0\.2 RLadies
RLadies is a world\-wide organization to promote gender diversity in the R community.
Location: online at [rladies.org](https://rladies.org/), on Twitter at [rladiesglobal](https://twitter.com/rladiesglobal)
Also: [WeAreRLadies](https://twitter.com/WeAreRLadies)
#### 9\.2\.0\.3 rOpenSci
What is it: rOpenSci builds software with a community of users and developers, and educates scientists about transparent research practices.
Location: online at [ropensci.org](https://ropensci.org/), on Twitter at [ropensci](https://twitter.com/ropensci)
Also: [roknowtifier](https://twitter.com/roknowtifier), [rocitations](https://twitter.com/rocitations)
#### 9\.2\.0\.4 R User Groups
What is it: R User Groups (“RUGs”) are in\-person meetups supported by [The R Consortium](https://www.r-consortium.org/projects/r-user-group-support-program).
Location: local chapters. See a [list of RUGs and conferences](https://jumpingrivers.github.io/meetingsR/r-user-groups.html).
Also: example: [Los Angeles R Users Group](https://twitter.com/la_rusers)
#### 9\.2\.0\.5 The Carpentries
What is it: Network teaching foundational data science skills to researchers worldwide
Location: online at [carpentries.org](https://carpentries.org), on Twitter at [thecarpentries](https://twitter.com/thecarpentries), local workshops worldwide
#### 9\.2\.0\.6 R4DS Community
What is it: A community of R learners at all skill levels working together to improve our skills.
Location: on Twitter: [R4DScommunity](https://twitter.com/R4DScommunity), on Slack — sign up from [rfordatasci.com](https://www.rfordatasci.com/)
Also: [\#tidytuesday](https://twitter.com/search?q=%23tidytuesday&src=typed_query), [R4DS\_es](https://twitter.com/RFDS_es)
### 9\.2\.1 Community awesomeness
Example with Sam Firke’s janitor package: [sfirke.github.io/janitor](http://sfirke.github.io/janitor/), highlighting the [`excel_numeric_to_date`](http://sfirke.github.io/janitor/reference/excel_numeric_to_date.html) function and learning about it through Twitter.
9\.3 How to use Twitter for \#rstats
------------------------------------
Twitter is how we connect with other R users, learn from each other, develop together, and become friends. Especially at an event like RStudio::conf, it is a great way to connect and stay connected with folks you meet.
Twitter is definitely a firehose of information, but if you use it deliberately, you can hear the signal through the noise.
I was super skeptical of Twitter. I thought it was a megaphone for angry people. But it turns out it is a place to have small, thoughtful conversations and be part of innovative and friendly communities.
### 9\.3\.1 Examples
Here are a few examples of how to use Twitter for \#rstats.
When I saw [this tweet](https://twitter.com/Md_Harris/status/1074469302974193665/photo/1) by [Md\_Harris](https://twitter.com/Md_Harris), this was my internal monologue:
1. Cool visualization!
2. I want to represent my data this way
3. He includes his [code](https://gist.github.com/mrecos) that I can look at to understand what he did, and I can run and remix
4. The package is from [sckottie](https://twitter.com/sckottie) — who I know from [rOpenSci](https://ropensci.org), which is a really amazing software developer community for science
5. [`rnoaa`](https://cran.r-project.org/web/packages/rnoaa/index.html) is a package making NOAA \[US environmental] data more accessible! I didn’t know about this, it will be so useful for my colleagues
6. I will retweet so my network can benefit as well
Another example, [this tweet](https://twitter.com/JennyBryan/status/1074339217986138113) where [JennyBryan](https://twitter.com/JennyBryan/) is asking for feedback on a super useful package for interfacing between R and excel: [`readxl`](https://readxl.tidyverse.org/).
My internal monologue:
1. Yay, `readxl` is awesome, and also getting better thanks to Jenny
2. Do I have any spreadsheets to contribute?
3. In any case, I will retweet so others can contribute. And I’ll like it too because I appreciate this work
### 9\.3\.2 How to Twitter
My advice for Twitter is to start off small and deliberately. Curate who you follow and start by listening. I use Twitter deliberately for R and science communities, so that is the majority of the folks I follow (but of course I also follow [Mark Hamill](https://twitter.com/HamillHimself)).
So start using Twitter to listen and learn, and then as you gradually build up courage, you can like and retweet things. And remember that liking and retweeting is not only a way to engage with the community yourself, but it is also a way to welcome and amplify other people. Sometimes I just reply saying how cool something is. Sometimes I like it. Sometimes I retweet. Sometimes I retweet with a quote/comment. But I also miss a lot of things since I limit how much time I give to Twitter, and that’s OK. You will always miss things but you are part of the community and they are there for you like you are for them.
If you’re joining twitter to learn R, I suggest following:
* [hadleywickham](https://twitter.com/hadleywickham)
* [JennyBryan](https://twitter.com/JennyBryan)
* [rOpenSci](https://twitter.com/ropensci)
* [WeAreRLadies](https://twitter.com/WeAreRLadies)
Listen to what they say and who joins those conversations, and follow other people and organizations. You could also look at who they are following. Also, check out the [\#rstats](https://twitter.com/search?q=%23rstats&src=typed_query) hashtag. This is not something that you can follow (although you can have it as a column in software like TweetDeck), but you can search it and you’ll see that the people you follow use it to help tag conversations. You’ll find other useful tags as well, within your domain, as well as other R\-related interests, e.g. [\#rspatial](https://twitter.com/search?q=%23rspatial&src=typed_query). When I read marine science papers, I see if the authors are on Twitter; I sometimes follow them, ask them questions, or just tell them I liked their work!
You can also follow us:
* [juliesquid](https://twitter.com/juliesquid)
* [allison\_horst](https://twitter.com/allison_horst)
* [jamiecmonty](https://twitter.com/jamiecmonty)
* [ECOuture9](https://twitter.com/ECOuture9)
These are just a few ways to learn and build community on Twitter. And as you feel comfortable, you can start sharing your ideas or your links too. Live\-tweeting is a really great way to engage as well, and bridge in\-person conferences with online communities. And of course, in addition to engaging on Twitter, check whether there are local RLadies chapters or other R meetups, and join! Or perhaps [start one](https://openscapes.org/blog/2018/11/16/how-to-start-a-coding-club/)?
So Twitter is a place to engage with folks and learn, and while it is also a place to ask questions, there are other places to look first, depending on your question.
9\.4 Getting help
-----------------
Getting help, or really helping you help yourself, means moving beyond “it’s not working” and towards solution\-oriented approaches. Part of this is the mindset where you **expect that someone has encountered this problem before** and that **most likely the problem is your typo or misuse**, and not that R is broken or hates you.
We’re going to talk about how to ask for help, how to interpret responses, and how to act upon the help you receive.
### 9\.4\.1 Read the error message
As we’ve talked about before, they may be red, they may be unfamiliar, but **error messages are your friends**. There are multiple types of messages that R will print. Read the message to figure out what it’s trying to tell you.
**Error:** There’s a fatal error in your code that prevented it from being run through successfully. You need to fix it for the code to run.
**Warning:** Non\-fatal errors (don’t stop the code from running, but this is a potential problem that you should know about).
**Message:** Here’s some helpful information about the code you just ran (you can hide these if you want to)
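To make these three types concrete, here is a minimal sketch (not from the workshop materials) that triggers one of each; try running the lines one at a time in your Console:

```
log(-1)              # Warning: the code still runs, but R warns "NaNs produced"
log("a")             # Error: non-numeric argument to mathematical function
message("All done!") # Message: helpful information, not a problem
```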
### 9\.4\.2 Googling
The internet has the answer to all of your R questions, hopes, and dreams.
When you get an error you don’t understand, copy it and paste it into Google. You can also add “rstats” or “tidyverse” or something to help Google (although it’s getting really good without it too).
For error messages, copy\-pasting the exact message is best. But if you have a “how do I…?” type question you can also enter this into Google. You’ll develop the vocabulary you need to refine your search terms as you become more familiar with R. It’s a continued learning process.
And just as important as Googling your error message is being able to identify a useful result.
Something I can’t emphasize enough: **pay attention to filepaths**. They tell you the source, they help you find pages again. Often remembering a few things about it will let you either google it again or navigate back there yourself.
**Check the date, check the source, check the relevance.** Is this a modern solution, or one from 2013? Do I trust the person responding? Is this about my question or on a different topic?
You will see links from many places, particularly:
* RStudio Community
* Stack Overflow
* Books, blogs, tutorials, courses, webinars
* GitHub Issues
### 9\.4\.3 Create a reprex
A “reprex” is a REPRoducible EXample: code that you need help with and want to ask someone about.
Jenny Bryan made the `reprex` package because “conversations about code are more productive with code that ***actually runs***, that ***I don’t have to run***, and that ***I can easily run***.”
Let me demo an example, and then you will do it yourself. This is Jenny’s summary from her [reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) of what I’ll do:
`reprex` is part of the Tidyverse, so we all already have it installed, but we do need to attach it:
```
library(reprex)
```
First let me create a little example that I have a question about. I want to know how I change the color of the geom\_points in my ggplot. (Reminder: this example is to illustrate reprex, not how you would actually look in the help pages!!!)
I’ll type into our RMarkdown script:
```
library(tidyverse)
ggplot(cars, aes(speed, dist)) +
geom_point()
```
So this is the code I have a question about. My next step is to select it all and copy it into my clipboard.
Then I go to my Console and type:
```
reprex()
```
Reprex does its thing, making sure this is a reproducible example — this wouldn’t be without `library(tidyverse)`! — and displaying it in my Viewer on the bottom\-right of the RStudio IDE.
reprex includes the output — experienced programmers who you might be asking for help can often read your code and know where the problem lies, especially when they can see the output.
When it finishes I also have what I see in the Viewer copied in my clipboard. I can paste it anywhere! In an email, Google Doc, in Slack. I’m going to paste mine in an Issue for my r\-workshop repository.
When I paste it:
Notice that following the backticks, there is only `r`, not `r{}`. This is because what we have pasted is formatted so that GitHub will display it as R code, but it won’t be executed by R through RMarkdown.
I can click on the Preview button in the Issues to see how this will render: and it will show my code, nicely formatted for R.
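For reference, here is a hedged sketch of what a tiny pasted reprex looks like once rendered: the code is plain R, and any console output comes back as comment lines starting with `#>`, so the whole chunk still runs as R code:

```
x <- c(1, 2, 4)
mean(x)
#> [1] 2.333333
```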
So in this example I might write at the top of the comment: “Practicing a reprex and Issues. @allison\_horst how do I change the point color to cyan?”
**`reprex` is a “workflow package”**. That means that it’s something we don’t put in Rmds, scripts, or anything else. We use it in the Console when we are preparing to ask for help — from ourselves or someone else.
### 9\.4\.4 Activity
Make a reprex using the built\-in `mtcars` dataset and paste it in the Issues for your repository. (Have a look: `head(mtcars); skimr::skim(mtcars)`)
1. install and attach the `reprex` package
2. For your reprex: take the `mtcars` dataset and then filter it for observations where `mpg` is more than 26\.
3. Navigate to github.com/your\_username/r\-workshop/issues
Hint: remember to read the error message. “could not find function `%>%`” means you’ve forgotten to attach the appropriate package with `library()`
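For example, if you forget that attach step, your reprex will include something like this (illustrative; the exact wording can vary slightly between R versions):

```
mtcars %>% filter(mpg > 26)
#> Error in mtcars %>% filter(mpg > 26): could not find function "%>%"
```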
#### 9\.4\.4\.1 Solution (no peeking)
```
## setup: run in Rmd or Console
library(reprex)
## reprex code: run in Rmd or Console
library(tidyverse) # or library(dplyr) or library(magrittr)
mtcars %>% filter(mpg > 26)
## copy the above
## reprex call: run in Console
reprex()
## paste in Issue!
```
9\.5 Collaborating with GitHub
------------------------------
Now we’re going to collaborate with a partner and set up for our last session, which will tie together everything we’ve been learning.
### 9\.5\.1 Create repo (Partner 1\)
Team up with a partner sitting next to you. Partner 1 will create a new repository. We will do this in the same way that we did in Chapter [4](github.html#github): [Create a repository on Github.com](github.html#create-a-repository-on-github.com).
Let’s name it `r-collab`.
### 9\.5\.2 Create a gh\-pages branch (Partner 1\)
We aren’t going to talk about branches very much, but they are a powerful feature of git/GitHub. I think of it as creating a copy of your work that becomes a parallel universe that you can modify safely because it’s not affecting your original work. And then you can choose to merge the universes back together if and when you want. By default, when you create a new repo you begin with one branch, and it is named `master`. When you create new branches, you can name them whatever you want. However, if you name one `gh-pages` (all lowercase, with a `-` and no spaces), this will let you create a website. And that’s our plan. So, Partner 1, do this to create a `gh-pages` branch:
On the homepage for your repo on GitHub.com, click the button that says “Branch:master.” Here, you can switch to another branch (right now there aren’t any others besides `master`), or create one by typing a new name.
Let’s type `gh-pages`.
Let’s also change `gh-pages` to the default branch and delete the master branch: this will be a one\-time\-only thing that we do here:
First click to control branches:
And then click to change the default branch to `gh-pages`. I like to then delete the `master` branch when it has the little red trash can next to it. It will make you confirm that you really want to delete it, which I do!
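If you prefer the command line, a rough equivalent with git itself looks like this (a sketch, assuming you already have a local clone; the GitHub.com buttons above do the same job, and you would still switch the default branch in Settings):

```
git checkout -b gh-pages     # create the gh-pages branch locally and switch to it
git push -u origin gh-pages  # publish the new branch to GitHub
```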
### 9\.5\.3 Give your collaborator privileges (Partner 1 and 2\)
Now, Partner 1, go into Settings \> Collaborators \> enter Partner 2’s (your collaborator’s) username.
Partner 2 then needs to check their email and accept as a collaborator. Notice that your collaborator has “Push access to the repository” (highlighted below):
### 9\.5\.4 Clone to a new R Project (Partner 1\)
Now let’s have Partner 1 clone the repository to their local computer. We’ll do this through RStudio like we did before (see Chapter [4](github.html#github): [Clone your repository using RStudio](github.html#clone-your-repository-using-rstudio)), but with a final additional step before hitting “Create Project”: select “Open in a new Session.”
Opening this Project in a new Session opens up a new world of awesomeness from RStudio. Having different RStudio project sessions allows you to keep your work separate and organized. So you can collaborate with this collaborator on this repository while also working on your other repository from this morning. I tend to have a lot of projects going at one time:
Have a look in your git tab.
Like we saw this morning, when you first clone a repo through RStudio, RStudio will add an `.Rproj` file to your repo. And if you didn’t add a `.gitignore` file when you originally created the repo on GitHub.com, RStudio will also add this for you. So, Partner 1, let’s go ahead and sync this back to GitHub.com.
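If you peek inside that RStudio\-generated `.gitignore`, it typically contains just a few R\-specific entries like these (the exact contents can vary a bit between RStudio versions):

```
.Rproj.user
.Rhistory
.RData
.Ruserdata
```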
Remember:
Let’s confirm that this was synced by looking at GitHub.com again. You may have to refresh the page, but you should see this commit where you added the `.Rproj` file.
### 9\.5\.5 Clone to a new R Project (Partner 2\)
Now it’s Partner 2’s turn! Partner 2, clone this repository following the same steps that Partner 1 just did. When you clone it, RStudio should not create any new files — why? Partner 1 already created and pushed the `.Rproj` and `.gitignore` files so they already exist in the repo.
### 9\.5\.6 Create data folder (Partner 2\)
Partner 2, let’s create a folder for our data and copy our `noaa_landings.csv` there.
And now let’s sync back to GitHub: Pull, Stage, Commit, Push
When we inspect on GitHub.com, click to view all the commits, you’ll see commits logged from both Partner 1 and 2!
> Question: Would you still be able to clone a repository that you are not a collaborator on? What do you think would happen? Try it! Can you sync back?
### 9\.5\.7 State of the Repository
OK, so where do things stand right now? GitHub.com has the most recent versions of all the repository’s files. Partner 2 also has these most recent versions locally. How about Partner 1?
Partner 1 does not have the most recent versions of everything on their computer.
Question: How can we change that? Or how could we even check?
Answer: PULL.
Let’s have Partner 1 go back to RStudio and Pull. If their files aren’t up\-to\-date, this will pull the most recent versions to their local computer. And if they already did have the most recent versions? Well, pulling doesn’t cost anything (other than an internet connection), so if everything is up\-to\-date, pulling is fine too.
I recommend pulling every time you come back to a collaborative repository. Whether you haven’t opened RStudio in a month or you’ve just been away for a lunch break, pull. It might not be necessary, but it can save a lot of heartache later.
9\.6 Merge conflicts
--------------------
What kind of heartache are we talking about? Merge conflicts.
Within a file, GitHub tracks changes line\-by\-line. So you can also have collaborators working on different lines within the same file and GitHub will be able to weave those changes into each other – that’s its job!
It’s when you have collaborators working on *the same lines within the same file* that you can have **merge conflicts**. This is when there is a conflict within the same line so that GitHub can’t merge automatically. It needs a human to help decide what information to keep (which is good because you don’t want GitHub to decide for you). Merge conflicts can be frustrating, but like R’s error messages, they are actually trying to help you.
So let’s experience this together: we will create and solve a merge conflict. **Stop and watch me demo how to create and solve a merge conflict with my Partner 2, and then you will do the same with your partner.** Here’s what I am going to do:
### 9\.6\.1 Pull (Partners 1 and 2\)
Both partners go to RStudio and pull so you have the most recent versions of all your files.
### 9\.6\.2 Create a conflict (Partners 1 and 2\)
Now, Partners 1 and 2, both go to the README.md, and on Line 4, write something, anything.
I’m not going to give any examples because when you do this I want to be sure that both Partners write something different. Then save the README.
### 9\.6\.3 Sync (Partner 2\)
OK. Now, let’s have Partner 2 sync: pull, stage, commit, push. Just like normal.
Great.
### 9\.6\.4 Sync attempts \& fixes (Partner 1\)
Now, let’s have Partner 1 (me) try.
When I try to Pull, I get the first error we will see today: “Your local changes to README.md would be overwritten by merge.” GitHub is telling me that it knows I’ve modified my README, but since I haven’t staged and committed them, it can’t do its job and merge my conflicts with whatever is different about the version from GitHub.com.
This is good: the alternative would be GitHub deciding which one to keep and it’s better that we have that kind of control and decision making.
GitHub provides some guidance: either commit this work first, or “stash it,” which you can interpret as moving the README temporarily to another folder somewhere outside of this GitHub repository so that you can successfully pull and then decide your next steps.
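If you want to try the “stash it” route (we will practice it in the activity below), a minimal command\-line sketch looks like this:

```
git stash      # set your uncommitted README changes aside
git pull       # now the pull can complete
git stash pop  # re-apply your changes; resolve any conflict that appears
```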
Let’s follow their advice and have Partner 1 commit. Great. Now let’s try pulling again.
New error: “Merge conflict in README…fix conflicts and then commit the result.”
So this error is different from the previous: GitHub knows what has changed line\-by\-line in my file here, and it knows what has changed line\-by\-line in the version on GitHub.com. And it knows there is a conflict between them. So it’s asking me to now compare these changes, choose a preference, and commit.
**Note:** if Partner 2 and I were not intentionally in this demo editing exactly the same lines, GitHub likely could have done its job and merged this file successfully after our first error fix above.
We will again follow GitHub’s advice to fix the conflicts. Let’s close this window and inspect.
Did you notice two other things that happened along with this message?
First, in the Git tab, next to the README listing there are orange `U`s; this means that there is an unresolved conflict. It also means my file is not staged with a check anymore, because modifications have occurred to the file since it was staged.
Second, the README file itself changed; there is new text and symbols. (We got a preview in the diff pane also).
```
<<<<<<< HEAD
Julie is collaborating on this README.
=======
**Allison is adding text here.**
>>>>>>> 05a189b23372f0bdb5b42630f8cb318003cee19b
```
In this example, Partner 1 is Julie and Partner 2 is Allison. GitHub is displaying the line that Julie wrote and the line that Allison wrote, separated by `=======`. These are the two choices that I (Partner 1\) have to decide between: which one do you want to keep? And where does this decision start and end? The lines are bounded by `<<<<<<< HEAD` and `>>>>>>> long commit identifier`.
So, to resolve this merge conflict, Partner 1 has to choose which one to keep. I tell GitHub my choice by deleting everything in this bundle of text except the line I want. So, Partner 1 will delete the `<<<<<<< HEAD`, `=======`, and `>>>>>>> long commit identifier` markers, plus whichever of Julie’s or Allison’s lines I don’t want to keep.
I’ll do this, and then commit again. In this example, we’ve kept Allison’s line:
Then I’ll stage, and write a commit message. I often write “resolving merge conflict” or something similar. When I stage the file, notice how now my edits look like a simple line replacement (compare with the image above before it was re\-staged):
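For reference, the terminal equivalent of that stage\-commit\-push sequence would look roughly like this (the RStudio Git pane is doing the same thing behind the scenes):

```
git add README.md
git commit -m "resolving merge conflict"
git push
```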
And we’re done! We can inspect on GitHub.com that I am the most recent contributor to this repository. And if we look in the commit history we will see both Allison and my original commits, along with our merge conflict fix.
### 9\.6\.5 Activity
Create a merge conflict with your partner, following the steps that we just did in the demo above. Practice different approaches to solving errors: for example, try stashing instead of committing.
### 9\.6\.6 How do you avoid merge conflicts?
Merge conflicts can occur when you collaborate with others — I find most often it is collaborating with ME from a different computer. They will happen, but you can minimize them by getting into good habits.
To minimize merge conflicts, pull often so that you are aware of anything that is different, and deal with it early. Similarly, commit and push often so that your contributions do not become too unwieldy for yourself or others later on.
Also, talk with your collaborators. Are they working on the exact same file right now that you need to be? If so, coordinate with them (in person, GChat, Slack, email). For example: “I’m working on X part and will push my changes before my meeting — then you can work on it and I’ll pull when I’m back.” Also, if you find yourself always working on the exact same file, you could consider breaking it into different files to minimize problems.
But merge conflicts will occur and some of them will be heartbreaking and demoralizing. They happen to me when I collaborate with myself between my work computer and laptop. We demoed small conflicts with just one file, but they can occur across many files, particularly when your code is generating figures, scripts, or HTML files. Sometimes the best approach is the [burn it all down method](https://happygitwithr.com/burn.html), where you delete your local copy of the repo and re\-clone.
Protect yourself by pulling and syncing often!
9\.7 Create your collaborative website
--------------------------------------
OK. Let’s have both Partners create a new RMarkdown file and name it `my_name_fisheries.Rmd`. Here’s what you will do:
1. Pull
2. Create a new RMarkdown file **and name it `my_name_fisheries.Rmd`**. Let’s do it all lowercase. These will become pages for our website
3. We’ll start by testing: let’s simply change the title inside the Rmd’s YAML header and call it “My Name’s Fisheries Analysis” (see the sketch after this list)
4. Knit
5. Save and sync your .Rmd and your .html files
* (pull, stage, commit, push)
6. Go to Partner 1’s repo, mine is [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
7. GitHub also supports this as a website (because we set up our gh\-pages branch)
Where is it? Figure out your website’s url from your GitHub repo’s url — pay attention to urls, and note that the website url starts with your **username.github.io**.
* my github repo: [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
* my website url: [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/)
* right now this displays the README as the “home page” for our website.
8. Now navigate to your web page! For example:
* my github repo: [https://github.com/jules32/r\-collab/julie\_fisheries](https://github.com/jules32/r-collab/julie_fisheries)
* my website url: [https://jules32\.github.io/r\-collab/julie\_fisheries](https://jules32.github.io/r-collab/julie_fisheries)
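For step 3, the top of your `.Rmd` file is a small YAML header; here is a minimal sketch of what it might look like after you change the title (the title below is just a placeholder):

```
---
title: "My Name's Fisheries Analysis"
output: html_document
---
```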
> ***ProTip*** Pay attention to URLs. An unsung skill of the modern analyst is to be able to navigate the internet by keeping an eye on patterns.
So cool!
You and your partner have created individual webpages here, but they do not talk to each other (i.e. you can’t navigate between them or even know that one exists from the other). We will not organize these pages into a website today, but you can practice this on your own with this hour\-long tutorial: [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/).
> **Aside:** On websites, if something is called `index.html`, that defaults to the home page. So [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/) is the same as [https://jules32\.github.io/r\-collab/index.html](https://jules32.github.io/r-collab/index.html). So as you think about building websites you can develop your index.Rmd file rather than your README.md as your homepage.
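If you do try that later, a hypothetical `index.Rmd` homepage could be as simple as this (the page names below are made up for illustration):

```
---
title: "Our Fisheries Analyses"
output: html_document
---

Team pages:

- [Julie's analysis](julie_fisheries.html)
- [Allison's analysis](allison_fisheries.html)
```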
#### 9\.7\.0\.1 Troubleshooting
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 9\.7\.1 END **collaborating** session!
9\.1 Summary
------------
Since the GitHub session (Chapter [4](github.html#github)), we have been practicing using GitHub with RStudio to collaborate with our most important collaborator: Future You.
Here we will practice using GitHub with RStudio to collaborate with others now, with a mindset towards Future Us (your colleagues that you know and have yet to meet). We will also how to engage with the \#rstats community, including how to engage on Twitter, and how to ask for help.
We are going to teach you the simplest way to collaborate with someone, which is for both of you to have privileges to directly edit and add files to a repository. GitHub is built for software developer teams, and there is a lot of features that limit who can directly edit files (which lead to “pull requests”), but we won’t cover that today.
### 9\.1\.1 Objectives
* intro to R communities
* How to effectively ask for help
+ Googling. Error messages are your friends
+ How to use Twitter for \#rstats
+ Create a reproducible example with `reprex`
* create a new repo and give permission to a collaborator
* publish webpages online
### 9\.1\.2 Resources
* [ESM 206 Intro to data science \& stats](https://allisonhorst.github.io), specifically [ESM Lecture 2](https://docs.google.com/presentation/d/1u1DdhU_WTv1b-sbQgqVGAE-bA2Nq_Yym8BzcPW4lS3k/edit#slide=id.g63942ead2d_0_219) \- by Allison Horst
* [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) \- by Thomas Mock
* [reprex.tidyverse.org](https://reprex.tidyverse.org/)
* [Reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) \- by Jenny Bryan
* [Getting help in R: do as I say, not as I’ve done](https://sctyner.github.io/rhelp.html) by Sam Tyner
* [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/) \- by Julie Lowndes
### 9\.1\.1 Objectives
* intro to R communities
* How to effectively ask for help
+ Googling. Error messages are your friends
+ How to use Twitter for \#rstats
+ Create a reproducible example with `reprex`
* create a new repo and give permission to a collaborator
* publish webpages online
### 9\.1\.2 Resources
* [ESM 206 Intro to data science \& stats](https://allisonhorst.github.io), specifically [ESM Lecture 2](https://docs.google.com/presentation/d/1u1DdhU_WTv1b-sbQgqVGAE-bA2Nq_Yym8BzcPW4lS3k/edit#slide=id.g63942ead2d_0_219) \- by Allison Horst
* [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) \- by Thomas Mock
* [reprex.tidyverse.org](https://reprex.tidyverse.org/)
* [Reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) \- by Jenny Bryan
* [Getting help in R: do as I say, not as I’ve done](https://sctyner.github.io/rhelp.html) by Sam Tyner
* [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/) \- by Julie Lowndes
9\.2 R communities
------------------
We are going to start off by talking about communities that exist around R and how you can engage with them.
R communities connect online and in person. And we use Twitter as a platform to connect with each other. Yes, Twitter is a legit tool for data science. Most communities have some degree of in\-person and online presence, with Twitter being a big part of that online presence, and it enables you to talk directly with people. On Twitter, we connect using the \#rstats hashtag, and thus often called the “rstats community” (more on Twitter in a moment).
This is a small (and incomplete!) sampling to give you a sense of a few communities. Please see Thomas Mock’s presentation [Finding the YOU in the R community](https://github.com/jthomasmock/presentations/blob/master/r-community2.pdf) for more details.
#### 9\.2\.0\.1 RStudio Community
What is it: Online community forum for all questions R \& RStudio
Location: online at [community.rstudio.com](https://community.rstudio.com%3E)
Also: [RStudio](https://twitter.com/rstudio) on Twitter
#### 9\.2\.0\.2 RLadies
RLadies is a world\-wide organization to promote gender diversity in the R community.
Location: online at [rladies.org](https://rladies.org/), on Twitter at [rladiesglobal](https://twitter.com/rladiesglobal)
Also: [WeAreRLadies](https://twitter.com/WeAreRLadies)
#### 9\.2\.0\.3 rOpenSci
What is it: rOpenSci builds software with a community of users and developers, and educate scientists about transparent research practices.
Location: online at [ropensci.org](https://ropensci.org/), on Twitter at [ropensci](https://twitter.com/ropensci)
Also: [roknowtifier](https://twitter.com/roknowtifier), [rocitations](https://twitter.com/rocitations)
#### 9\.2\.0\.4 R User Groups
What is it: R User Groups (“RUGs”) are in\-person meetups supported by [The R Consortium](https://www.r-consortium.org/projects/r-user-group-support-program).
Location: local chapters. See a [list of RUGs and conferences](https://jumpingrivers.github.io/meetingsR/r-user-groups.html).
Also: example: [Los Angeles R Users Group](https://twitter.com/la_rusers)
#### 9\.2\.0\.5 The Carpentries
What is it: Network teaching foundational data science skills to researchers worldwide
Location: online at [carpentries.org](https://carpentries.org), on Twitter at [thecarpentries](https://twitter.com/thecarpentries), local workshops worldwide
#### 9\.2\.0\.6 R4DS Community
What is it: A community of R learners at all skill levels working together to improve our skills.
Location: on Twitter: [R4DScommunity](https://twitter.com/R4DScommunity), on Slack — sign up from [rfordatasci.com](https://www.rfordatasci.com/)
Also: [\#tidytuesday](https://twitter.com/search?q=%23tidytuesday&src=typed_query), [R4DS\_es](https://twitter.com/RFDS_es)
### 9\.2\.1 Community awesomeness
Example with Sam Firke’s janitor package: [sfirke.github.io/janitor](http://sfirke.github.io/janitor/), highlighting the [`excel_numeric_to_date`](http://sfirke.github.io/janitor/reference/excel_numeric_to_date.html) function and learning about it through Twitter.
#### 9\.2\.0\.1 RStudio Community
What is it: Online community forum for all questions R \& RStudio
Location: online at [community.rstudio.com](https://community.rstudio.com%3E)
Also: [RStudio](https://twitter.com/rstudio) on Twitter
#### 9\.2\.0\.2 RLadies
RLadies is a world\-wide organization to promote gender diversity in the R community.
Location: online at [rladies.org](https://rladies.org/), on Twitter at [rladiesglobal](https://twitter.com/rladiesglobal)
Also: [WeAreRLadies](https://twitter.com/WeAreRLadies)
#### 9\.2\.0\.3 rOpenSci
What is it: rOpenSci builds software with a community of users and developers, and educate scientists about transparent research practices.
Location: online at [ropensci.org](https://ropensci.org/), on Twitter at [ropensci](https://twitter.com/ropensci)
Also: [roknowtifier](https://twitter.com/roknowtifier), [rocitations](https://twitter.com/rocitations)
#### 9\.2\.0\.4 R User Groups
What is it: R User Groups (“RUGs”) are in\-person meetups supported by [The R Consortium](https://www.r-consortium.org/projects/r-user-group-support-program).
Location: local chapters. See a [list of RUGs and conferences](https://jumpingrivers.github.io/meetingsR/r-user-groups.html).
Also: example: [Los Angeles R Users Group](https://twitter.com/la_rusers)
#### 9\.2\.0\.5 The Carpentries
What is it: Network teaching foundational data science skills to researchers worldwide
Location: online at [carpentries.org](https://carpentries.org), on Twitter at [thecarpentries](https://twitter.com/thecarpentries), local workshops worldwide
#### 9\.2\.0\.6 R4DS Community
What is it: A community of R learners at all skill levels working together to improve our skills.
Location: on Twitter: [R4DScommunity](https://twitter.com/R4DScommunity), on Slack — sign up from [rfordatasci.com](https://www.rfordatasci.com/)
Also: [\#tidytuesday](https://twitter.com/search?q=%23tidytuesday&src=typed_query), [R4DS\_es](https://twitter.com/RFDS_es)
### 9\.2\.1 Community awesomeness
Example with Sam Firke’s janitor package: [sfirke.github.io/janitor](http://sfirke.github.io/janitor/), highlighting the [`excel_numeric_to_date`](http://sfirke.github.io/janitor/reference/excel_numeric_to_date.html) function and learning about it through Twitter.
9\.3 How to use Twitter for \#rstats
------------------------------------
Twitter is how we connect with other R users, learn from each other, develop together, and become friends. Especially at an event like RStudio::conf, it is a great way to stay connect and stay connected with folks you meet.
Twitter is definitely a firehose of information, but if you use it deliberately, you can hear the signal through the noise.
I was super skeptical of Twitter. I thought it was a megaphone for angry people. But it turns out it is a place to have small, thoughtful conversations and be part of innovative and friendly communities.
### 9\.3\.1 Examples
Here are a few examples of how to use Twitter for \#rstats.
When I saw [this tweet](https://twitter.com/Md_Harris/status/1074469302974193665/photo/1) by [Md\_Harris](https://twitter.com/Md_Harris), this was my internal monologue:
1. Cool visualization!
2. I want to represent my data this way
3. He includes his [code](https://gist.github.com/mrecos) that I can look at to understand what he did, and I can run and remix
4. The package is from [sckottie](https://twitter.com/sckottie) — who I know from [rOpenSci](https://ropensci.org), which is a really amazing software developer community for science
5. [`rnoaa`](https://cran.r-project.org/web/packages/rnoaa/index.html) is a package making NOAA \[US environmental] data more accessible! I didn’t know about this, it will be so useful for my colleagues
6. I will retweet so my network can benefit as well
Another example, [this tweet](https://twitter.com/JennyBryan/status/1074339217986138113) where [JennyBryan](https://twitter.com/JennyBryan/) is asking for feedback on a super useful package for interfacing between R and excel: [`readxl`](https://readxl.tidyverse.org/).
My internal monologue:
1. Yay, `readxl` is awesome, and also getting better thanks to Jenny
2. Do I have any spreadsheets to contribute?
3. In any case, I will retweet so others can contribute. And I’ll like it too because I appreciate this work
### 9\.3\.2 How to Twitter
My advice for Twitter is to start off small and deliberately. Curate who you follow and start by listening. I use Twitter deliberately for R and science communities, so that is the majority of the folks I follow (but of course I also follow [Mark Hamill](https://twitter.com/HamillHimself).
So start using Twitter to listen and learn, and then as you gradually build up courage, you can like and retweet things. And remember that liking and retweeting is not only a way to engage with the community yourself, but it is also a way to welcome and amplify other people. Sometimes I just reply saying how cool something is. Sometimes I like it. Sometimes I retweet. Sometimes I retweet with a quote/comment. But I also miss a lot of things since I limit how much time I give to Twitter, and that’s OK. You will always miss things but you are part of the community and they are there for you like you are for them.
If you’re joining twitter to learn R, I suggest following:
* [hadleywickham](https://twitter.com/hadleywickham)
* [JennyBryan](https://twitter.com/JennyBryan)
* [rOpenSci](https://twitter.com/ropensci)
* [WeAreRLadies](https://twitter.com/https://twitter.com/WeAreRLadies)
Listen to what they say and who joins those conversations, and follow other people and organizations. You could also look at who they are following. Also, check out the [\#rstats](https://twitter.com/search?q=%23rstats&src=typed_query) hashtag. This is not something that you can follow (although you can have it as a column in software like TweetDeck), but you can search it and you’ll see that the people you follow use it to help tag conversations. You’ll find other useful tags as well, within your domain, as well as other R\-related interests, e.g. [\#rspatial](https://twitter.com/search?q=%23rspatial&src=typed_query). When I read marine science papers, I see if the authors are on Twitter; I sometimes follow them, ask them questions, or just tell them I liked their work!
You can also follow us:
* [juliesquid](https://twitter.com/juliesquid)
* [allison\_horst](https://twitter.com/allison_horst)
* [jamiecmonty](https://twitter.com/jamiecmonty)
* [ECOuture9](https://twitter.com/ECOuture9)
These are just a few ways to learn and build community on Twitter. And as you feel comfortable, you can start sharing your ideas or your links too. Live\-tweeting is a really great way to engage as well, and bridge in\-person conferences with online communities. And of course, in addition to engaging on Twitter, check whether there are local RLadies chapters or other R meetups, and join! Or perhaps [start one](https://openscapes.org/blog/2018/11/16/how-to-start-a-coding-club/)?
So Twitter is a place to engage with folks and learn, and while it is also a place to ask questions, there are other places to look first, depending on your question.
### 9\.3\.1 Examples
Here are a few examples of how to use Twitter for \#rstats.
When I saw [this tweet](https://twitter.com/Md_Harris/status/1074469302974193665/photo/1) by [Md\_Harris](https://twitter.com/Md_Harris), this was my internal monologue:
1. Cool visualization!
2. I want to represent my data this way
3. He includes his [code](https://gist.github.com/mrecos) that I can look at to understand what he did, and I can run and remix
4. The package is from [sckottie](https://twitter.com/sckottie) — who I know from [rOpenSci](https://ropensci.org), which is a really amazing software developer community for science
5. [`rnoaa`](https://cran.r-project.org/web/packages/rnoaa/index.html) is a package making NOAA \[US environmental] data more accessible! I didn’t know about this, it will be so useful for my colleagues
6. I will retweet so my network can benefit as well
Another example, [this tweet](https://twitter.com/JennyBryan/status/1074339217986138113) where [JennyBryan](https://twitter.com/JennyBryan/) is asking for feedback on a super useful package for interfacing between R and excel: [`readxl`](https://readxl.tidyverse.org/).
My internal monologue:
1. Yay, `readxl` is awesome, and also getting better thanks to Jenny
2. Do I have any spreadsheets to contribute?
3. In any case, I will retweet so others can contribute. And I’ll like it too because I appreciate this work
### 9\.3\.2 How to Twitter
My advice for Twitter is to start off small and deliberately. Curate who you follow and start by listening. I use Twitter deliberately for R and science communities, so that is the majority of the folks I follow (but of course I also follow [Mark Hamill](https://twitter.com/HamillHimself).
So start using Twitter to listen and learn, and then as you gradually build up courage, you can like and retweet things. And remember that liking and retweeting is not only a way to engage with the community yourself, but it is also a way to welcome and amplify other people. Sometimes I just reply saying how cool something is. Sometimes I like it. Sometimes I retweet. Sometimes I retweet with a quote/comment. But I also miss a lot of things since I limit how much time I give to Twitter, and that’s OK. You will always miss things but you are part of the community and they are there for you like you are for them.
If you’re joining twitter to learn R, I suggest following:
* [hadleywickham](https://twitter.com/hadleywickham)
* [JennyBryan](https://twitter.com/JennyBryan)
* [rOpenSci](https://twitter.com/ropensci)
* [WeAreRLadies](https://twitter.com/https://twitter.com/WeAreRLadies)
Listen to what they say and who joins those conversations, and follow other people and organizations. You could also look at who they are following. Also, check out the [\#rstats](https://twitter.com/search?q=%23rstats&src=typed_query) hashtag. This is not something that you can follow (although you can have it as a column in software like TweetDeck), but you can search it and you’ll see that the people you follow use it to help tag conversations. You’ll find other useful tags as well, within your domain, as well as other R\-related interests, e.g. [\#rspatial](https://twitter.com/search?q=%23rspatial&src=typed_query). When I read marine science papers, I see if the authors are on Twitter; I sometimes follow them, ask them questions, or just tell them I liked their work!
You can also follow us:
* [juliesquid](https://twitter.com/juliesquid)
* [allison\_horst](https://twitter.com/allison_horst)
* [jamiecmonty](https://twitter.com/jamiecmonty)
* [ECOuture9](https://twitter.com/ECOuture9)
These are just a few ways to learn and build community on Twitter. And as you feel comfortable, you can start sharing your ideas or your links too. Live\-tweeting is a really great way to engage as well, and bridge in\-person conferences with online communities. And of course, in addition to engaging on Twitter, check whether there are local RLadies chapters or other R meetups, and join! Or perhaps [start one](https://openscapes.org/blog/2018/11/16/how-to-start-a-coding-club/)?
So Twitter is a place to engage with folks and learn, and while it is also a place to ask questions, there are other places to look first, depending on your question.
9\.4 Getting help
-----------------
Getting help, or really helping you help yourself, means moving beyond “it’s not working” and towards solution\-oriented approaches. Part of this is the mindset where you **expect that someone has encountered this problem before** and that **most likely the problem is your typo or misuse**, and not that R is broken or hates you.
We’re going to talk about how to ask for help, how to interpret responses, and how to act upon the help you receive.
### 9\.4\.1 Read the error message
As we’ve talked about before, they may be red, they may be unfamiliar, but **error messages are your friends**. There are multiple types of messages that R will print. Read the message to figure out what it’s trying to tell you.
**Error:** There’s a fatal error in your code that prevented it from being run through successfully. You need to fix it for the code to run.
**Warning:** Non\-fatal errors (don’t stop the code from running, but this is a potential problem that you should know about).
**Message:** Here’s some helpful information about the code you just ran (you can hide these if you want to)
### 9\.4\.2 Googling
The internet has the answer to all of your R questions, hopes, and dreams.
When you get an error you don’t understand, copy it and paste it into Google. You can also add “rstats” or “tidyverse” or something to help Google (although it’s getting really good without it too).
For error messages, copy\-pasting the exact message is best. But if you have a “how do I…?” type question you can also enter this into Google. You’ll develop the vocabulary you need to refine your search terms as you become more familiar with R. It’s a continued learning process.
And just as important as Googling your error message is being able to identify a useful result.
Something I can’t emphasize enough: **pay attention to filepaths**. They tell you the source, they help you find pages again. Often remembering a few things about it will let you either google it again or navigate back there yourself.
**Check the date, check the source, check the relevance.** Is this a modern solution, or one from 2013? Do I trust the person responding? Is this about my question or on a different topic?
You will see links from many places, particularly:
* RStudio Community
* Stack Overflow
* Books, blogs, tutorials, courses, webinars
* GitHub Issues
### 9\.4\.3 Create a reprex
A “reprex” is a REPRoducible EXample: code that you need help with and want to ask someone about.
Jenny Bryan made the `reprex` package because “conversations about code are more productive with code that ***actually runs***, that ***I don’t have to run***, and that ***I can easily run***.”
Let me demo an example, and then you will do it yourself. This is Jenny’s summary from her [reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) of what I’ll do:
`reprex` is part of the Tidyverse, so we all already have it installed, but we do need to attach it:
```
library(reprex)
```
First let me create a little example that I have a question about. I want to know how I change the color of the geom\_points in my ggplot. (Reminder: this example is to illustrate reprex, not how you would actually look in the help pages!!!)
I’ll type into our RMarkdown script:
```
library(tidyverse)
ggplot(cars, aes(speed, dist)) +
geom_point()
```
So this is the code I have a question about it. My next step is to select it all and copy it into my clipboard.
Then I go to my Console and type:
```
reprex()
```
Reprex does its thing, making sure this is a reproducible example — this wouldn’t be without `library(tidyverse)`! — and displaying it in my Viewer on the bottom\-right of the RStudio IDE.
reprex includes the output — experienced programmers who you might be asking for help can often read your code and know where the problem lies, especially when they can see the output.
When it finishes I also have what I see in the Viewer copied in my clipboard. I can paste it anywhere! In an email, Google Doc, in Slack. I’m going to paste mine in an Issue for my r\-workshop repository.
When I paste it:
Notice that following the backticks, there is only `r`, not `r{}`. This is because what we have pasted is so GitHub can format as R code, but it won’t be executed by R through RMarkdown.
I can click on the Preview button in the Issues to see how this will render: and it will show my code, nicely formatted for R.
So in this example I might write at the top of the comment: “Practicing a reprex and Issues.”allison\_horst how do I change the point color to cyan?"
**`reprex` is a “workflow package”**. That means that it’s something we don’t put in Rmds, scripts, or anything else. We use is in the Console when we are preparing to ask for help — from ourselves or someone else.
### 9\.4\.4 Activity
Make a reprex using the built\-in `mtcars` dataset and paste it in the Issues for your repository. (Have a look: `head(mtcars); skimr::skim(mtcars)`)
1. install and attach the `reprex` package
2. For your reprex: take the `mtcars` dataset and then filter it for observations where `mpg` is more than 26\.
3. Navigate to github.com/your\_username/r\-workshop/issues
Hint: remember to read the error message. “could not find function `%>%`” means you’ve forgotten to attach the appropriate package with `library()`
#### 9\.4\.4\.1 Solution (no peeking)
```
## setup: run in Rmd or Console
library(reprex)
## reprex code: run in Rmd or Console
library(tidyverse) # or library(dplyr) or library(magrittr)
mtcars %>% filter(mpg > 26)
## copy the above
## reprex call: run in Console
reprex()
## paste in Issue!
```
### 9\.4\.1 Read the error message
As we’ve talked about before, they may be red, they may be unfamiliar, but **error messages are your friends**. There are multiple types of messages that R will print. Read the message to figure out what it’s trying to tell you.
**Error:** There’s a fatal error in your code that prevented it from being run through successfully. You need to fix it for the code to run.
**Warning:** Non\-fatal errors (don’t stop the code from running, but this is a potential problem that you should know about).
**Message:** Here’s some helpful information about the code you just ran (you can hide these if you want to)
### 9\.4\.2 Googling
The internet has the answer to all of your R questions, hopes, and dreams.
When you get an error you don’t understand, copy it and paste it into Google. You can also add “rstats” or “tidyverse” or something to help Google (although it’s getting really good without it too).
For error messages, copy\-pasting the exact message is best. But if you have a “how do I…?” type question you can also enter this into Google. You’ll develop the vocabulary you need to refine your search terms as you become more familiar with R. It’s a continued learning process.
And just as important as Googling your error message is being able to identify a useful result.
Something I can’t emphasize enough: **pay attention to filepaths**. They tell you the source, they help you find pages again. Often remembering a few things about it will let you either google it again or navigate back there yourself.
**Check the date, check the source, check the relevance.** Is this a modern solution, or one from 2013? Do I trust the person responding? Is this about my question or on a different topic?
You will see links from many places, particularly:
* RStudio Community
* Stack Overflow
* Books, blogs, tutorials, courses, webinars
* GitHub Issues
### 9\.4\.3 Create a reprex
A “reprex” is a REPRoducible EXample: code that you need help with and want to ask someone about.
Jenny Bryan made the `reprex` package because “conversations about code are more productive with code that ***actually runs***, that ***I don’t have to run***, and that ***I can easily run***.”
Let me demo an example, and then you will do it yourself. This is Jenny’s summary from her [reprex webinar](https://resources.rstudio.com/webinars/help-me-help-you-creating-reproducible-examples-jenny-bryan) of what I’ll do:
`reprex` is part of the Tidyverse, so we all already have it installed, but we do need to attach it:
```
library(reprex)
```
First let me create a little example that I have a question about. I want to know how I change the color of the geom\_points in my ggplot. (Reminder: this example is to illustrate reprex, not how you would actually look in the help pages!!!)
I’ll type into our RMarkdown script:
```
library(tidyverse)
ggplot(cars, aes(speed, dist)) +
geom_point()
```
So this is the code I have a question about. My next step is to select it all and copy it into my clipboard.
Then I go to my Console and type:
```
reprex()
```
Reprex does its thing, making sure this is a reproducible example — this wouldn’t be without `library(tidyverse)`! — and displaying it in my Viewer on the bottom\-right of the RStudio IDE.
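By default, `reprex()` renders everything as GitHub\-flavored markdown, which is exactly what an Issue wants. If you are curious, the `venue` argument can target other destinations; a couple of examples (check `?reprex` for the options available in your installed version):
```
reprex()            # default venue: GitHub-flavored markdown
reprex(venue = "r") # a commented R script instead, handy for pasting into email
```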
reprex includes the output — experienced programmers who you might be asking for help can often read your code and know where the problem lies, especially when they can see the output.
When it finishes I also have what I see in the Viewer copied in my clipboard. I can paste it anywhere! In an email, Google Doc, in Slack. I’m going to paste mine in an Issue for my r\-workshop repository.
When I paste it:
Notice that following the backticks there is only `r`, not `{r}`. That’s because what we have pasted is formatted so GitHub can display it as R code; it is not a code chunk that R will execute through RMarkdown.
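For reference, the pasted body is your code with any printed output inserted as comment lines prefixed by `#>` (plots come through as links to images). A tiny sketch of the idea, using a simpler command:
```
mean(cars$speed)
#> [1] 15.4
```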
I can click on the Preview button in the Issues to see how this will render: and it will show my code, nicely formatted for R.
So in this example I might write at the top of the comment: “Practicing a reprex and Issues. @allison\_horst how do I change the point color to cyan?”
**`reprex` is a “workflow package”**. That means that it’s something we don’t put in Rmds, scripts, or anything else. We use it in the Console when we are preparing to ask for help — from ourselves or someone else.
### 9\.4\.4 Activity
Make a reprex using the built\-in `mtcars` dataset and paste it in the Issues for your repository. (Have a look: `head(mtcars); skimr::skim(mtcars)`)
1. Install and attach the `reprex` package
2. For your reprex: take the `mtcars` dataset and then filter it for observations where `mpg` is more than 26\.
3. Navigate to github.com/your\_username/r\-workshop/issues
Hint: remember to read the error message. “could not find function `%>%`” means you’ve forgotten to attach the appropriate package with `library()`
#### 9\.4\.4\.1 Solution (no peeking)
```
## setup: run in Rmd or Console
library(reprex)
## reprex code: run in Rmd or Console
library(tidyverse) # or library(dplyr) or library(magrittr)
mtcars %>% filter(mpg > 26)
## copy the above
## reprex call: run in Console
reprex()
## paste in Issue!
```
9\.5 Collaborating with GitHub
------------------------------
Now we’re going to collaborate with a partner and set up for our last session, which will tie together everything we’ve been learning.
### 9\.5\.1 Create repo (Partner 1\)
Team up with a partner sitting next to you. Partner 1 will create a new repository. We will do this in the same way that we did in Chapter [4](github.html#github): [Create a repository on Github.com](github.html#create-a-repository-on-github.com).
Let’s name it `r-collab`.
### 9\.5\.2 Create a gh\-pages branch (Partner 1\)
We aren’t going to talk about branches very much, but they are a powerful feature of git/GitHub. I think of it as creating a copy of your work that becomes a parallel universe that you can modify safely because it’s not affecting your original work. And then you can choose to merge the universes back together if and when you want. By default, when you create a new repo you begin with one branch, and it is named `master`. When you create new branches, you can name them whatever you want. However, if you name one `gh-pages` (all lowercase, with a `-` and no spaces), this will let you create a website. And that’s our plan. So, Partner 1, do this to create a `gh-pages` branch:
On the homepage for your repo on GitHub.com, click the button that says “Branch:master.” Here, you can switch to another branch (right now there aren’t any others besides `master`), or create one by typing a new name.
Let’s type `gh-pages`.
Let’s also change `gh-pages` to the default branch and delete the master branch: this will be a one\-time\-only thing that we do here:
First click to control branches:
And then click to change the default branch to `gh-pages`. I like to then delete the `master` branch when it has the little red trash can next to it. It will make you confirm that you really want to delete it, which I do!
### 9\.5\.3 Give your collaborator privileges (Partner 1 and 2\)
Now, Partner 1, go into Settings \> Collaborators \> enter Partner 2’s (your collaborator’s) username.
Partner 2 then needs to check their email and accept as a collaborator. Notice that your collaborator has “Push access to the repository” (highlighted below):
### 9\.5\.4 Clone to a new R Project (Partner 1\)
Now let’s have Partner 1 clone the repository to their local computer. We’ll do this through RStudio like we did before (see Chapter [4](github.html#github): [Clone your repository using RStudio](github.html#clone-your-repository-using-rstudio)), but with a final additional step before hitting “Create Project”: select “Open in a new Session.”
Opening this Project in a new Session opens up a new world of awesomeness from RStudio. Having different RStudio project sessions allows you to keep your work separate and organized. So you can collaborate with this collaborator on this repository while also working on your other repository from this morning. I tend to have a lot of projects going at one time:
Have a look in your git tab.
Like we saw this morning, when you first clone a repo through RStudio, RStudio will add an `.Rproj` file to your repo. And if you didn’t add a `.gitignore` file when you originally created the repo on GitHub.com, RStudio will also add this for you. So, Partner 1, let’s go ahead and sync this back to GitHub.com.
Remember:
Let’s confirm that this was synced by looking at GitHub.com again. You may have to refresh the page, but you should see this commit where you added the `.Rproj` file.
### 9\.5\.5 Clone to a new R Project (Partner 2\)
Now it’s Partner 2’s turn! Partner 2, clone this repository following the same steps that Partner 1 just did. When you clone it, RStudio should not create any new files — why? Partner 1 already created and pushed the `.Rproj` and `.gitignore` files so they already exist in the repo.
### 9\.5\.6 Create data folder (Partner 2\)
Partner 2, let’s create a folder for our data and copy our `noaa_landings.csv` there.
And now let’s sync back to GitHub: Pull, Stage, Commit, Push
When we inspect on GitHub.com, click to view all the commits, you’ll see commits logged from both Partner 1 and 2!
> Question: Would you still be able to clone a repository that you are not a collaborator on? What do you think would happen? Try it! Can you sync back?
### 9\.5\.7 State of the Repository
OK, so where do things stand right now? GitHub.com has the most recent versions of all the repository’s files. Partner 2 also has these most recent versions locally. How about Partner 1?
Partner 1 does not have the most recent versions of everything on their computer.
Question: How can we change that? Or how could we even check?
Answer: PULL.
Let’s have Partner 1 go back to RStudio and Pull. If their files aren’t up\-to\-date, this will pull the most recent versions to their local computer. And if they already did have the most recent versions? Well, pulling doesn’t cost anything (other than an internet connection), so if everything is up\-to\-date, pulling is fine too.
I recommend pulling every time you come back to a collaborative repository. Whether you haven’t opened RStudio in a month or you’ve just been away for a lunch break, pull. It might not be necessary, but it can save a lot of heartache later.
9\.6 Merge conflicts
--------------------
What kind of heartache are we talking about? Merge conflicts.
Within a file, GitHub tracks changes line\-by\-line. So you can have collaborators working on different lines within the same file, and GitHub will be able to weave those changes into each other – that’s its job!
It’s when you have collaborators working on *the same lines within the same file* that you can have **merge conflicts**. This is when there is a conflict within the same line, so GitHub can’t merge automatically. It needs a human to help decide what information to keep (which is good, because you don’t want GitHub to decide for you). Merge conflicts can be frustrating, but like R’s error messages, they are actually trying to help you.
So let’s experience this together: we will create and solve a merge conflict. **Stop and watch me demo how to create and solve a merge conflict with my Partner 2, and then you will do the same with your partner.** Here’s what I am going to do:
### 9\.6\.1 Pull (Partners 1 and 2\)
Both partners go to RStudio and pull so you have the most recent versions of all your files.
### 9\.6\.2 Create a conflict (Partners 1 and 2\)
Now, Partners 1 and 2, both go to the README.md and, on Line 4, write something, anything.
I’m not going to give any examples because, when you do this, I want to be sure that both partners write something different. Save the README.
### 9\.6\.3 Sync (Partner 2\)
OK. Now, let’s have Partner 2 sync: pull, stage, commit, push. Just like normal.
Great.
### 9\.6\.4 Sync attempts \& fixes (Partner 1\)
Now, let’s have Partner 1 (me) try.
When I try to Pull, I get the first error we will see today: “Your local changes to README.md would be overwritten by merge.” GitHub is telling me that it knows I’ve modified my README, but since I haven’t staged and committed those changes, it can’t do its job of merging them with whatever is different in the version on GitHub.com.
This is good: the alternative would be GitHub deciding which one to keep and it’s better that we have that kind of control and decision making.
GitHub provides some guidance: either commit this work first, or “stash it,” which you can interpret as moving the README temporarily to another folder somewhere outside of this GitHub repository so that you can successfully pull, and then decide your next steps.
Let’s follow their advice and have Partner 1 commit. Great. Now let’s try pulling again.
New error: “Merge conflict in README…fix conflicts and then commit the result.”
So this error is different from the previous: GitHub knows what has changed line\-by\-line in my file here, and it knows what has changed line\-by\-line in the version on GitHub.com. And it knows there is a conflict between them. So it’s asking me to now compare these changes, choose a preference, and commit.
**Note:** if Partner 2 and I were not intentionally in this demo editing exactly the same lines, GitHub likely could have done its job and merged this file successfully after our first error fix above.
We will again follow GitHub’s advice to fix the conflicts. Let’s close this window and inspect.
Did you notice two other things that happened along with this message?
First, in the Git tab, next to the README listing there are orange `U`s; this means that there is an unresolved conflict. The file is also no longer staged with a check mark, because it has been modified since it was staged.
Second, the README file itself changed; there is new text and symbols. (We got a preview in the diff pane also).
```
<<<<<<< HEAD
Julie is collaborating on this README.
=======
**Allison is adding text here.**
>>>>>>> 05a189b23372f0bdb5b42630f8cb318003cee19b
```
In this example, Partner 1 is Julie and Partner 2 is Allison. GitHub is displaying the line that Julie wrote and the line that Allison wrote, separated by `=======`. These are the two choices that I (Partner 1\) have to decide between: which one do I want to keep? And where does this decision start and end? The lines are bounded by `<<<<<<< HEAD` and `>>>>>>>` followed by a long commit identifier.
So, to resolve this merge conflict, Partner 1 has to choose which one to keep. I tell GitHub my choice by deleting everything in this bundle of text except the line I want. So, Partner 1 will delete the `<<<<<<< HEAD`, `=======`, and `>>>>>>> long commit identifier` lines, plus whichever of Julie’s or Allison’s lines I don’t want to keep.
I’ll do this, and then commit again. In this example, we’ve kept Allison’s line:
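With the conflict markers removed, that section of the README is reduced to the single line we kept:
```
**Allison is adding text here.**
```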
Then I’ll stage, and write a commit message. I often write “resolving merge conflict” or something similar. When I stage the file, notice how now my edits look like a simple line replacement (compare with the image above before it was re\-staged):
And we’re done! We can inspect on GitHub.com that I am the most recent contributor to this repository. And if we look in the commit history we will see both Allison and my original commits, along with our merge conflict fix.
### 9\.6\.5 Activity
Create a merge conflict with your partner, following the steps that we just did in the demo above. Practice different approaches to solving errors: for example, try stashing instead of committing.
### 9\.6\.6 How do you avoid merge conflicts?
Merge conflicts can occur when you collaborate with others — I find most often it is collaborating with ME from a different computer. They will happen, but you can minimize them by getting into good habits.
To minimize merge conflicts, pull often so that you are aware of anything that is different, and deal with it early. Similarly, commit and push often so that your contributions do not become too unwieldy for yourself or others later on.
Also, talk with your collaborators. Are they working on the exact same file right now that you need to be? If so, coordinate with them (in person, GChat, Slack, email). For example: “I’m working on X part and will push my changes before my meeting — then you can work on it and I’ll pull when I’m back.” Also, if you find yourself always working on the exact same file, you could consider breaking it into different files to minimize problems.
But merge conflicts will occur and some of them will be heartbreaking and demoralizing. They happen to me when I collaborate with myself between my work computer and laptop. We demoed small conflicts with just one file, but they can occur across many files, particularly when your code is generating figures, scripts, or HTML files. Sometimes the best approach is the [burn it all down method](https://happygitwithr.com/burn.html), where you delete your local copy of the repo and re\-clone.
Protect yourself by pulling and syncing often!
9\.7 Create your collaborative website
--------------------------------------
OK. Let’s have both Partners create a new RMarkdown file and name it `my_name_fisheries.Rmd`. Here’s what you will do:
1. Pull
2. Create a new RMarkdown file **and name it `my_name_fisheries.Rmd`**. Let’s do it all lowercase. These will become pages for our website.
3. We’ll start by testing: let’s simply change the title inside the Rmd and call it “My Name’s Fisheries Analysis” (see the example YAML header just after this list)
4. Knit
5. Save and sync your .Rmd and your .html files
* (pull, stage, commit, push)
6. Go to Partner 1’s repo, mine is [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
7. GitHub also supports this as a website (because we set up our gh\-pages branch)
Where is it? Figure out your website’s URL from your GitHub repo’s URL — pay attention to URLs. Note that the website URL starts with **username.github.io**:
* my github repo: [https://github.com/jules32/r\-collab/](https://github.com/jules32/r-collab/)
* my website url: [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/)
* right now this displays the README as the “home page” for our website.
8. Now navigate to your web page! For example:
* my github repo: [https://github.com/jules32/r\-collab/julie\_fisheries](https://github.com/jules32/r-collab/julie_fisheries)
* my website url: [https://jules32\.github.io/r\-collab/julie\_fisheries](https://jules32.github.io/r-collab/julie_fisheries)
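For step 3 above, the title lives in the YAML header at the very top of your new .Rmd. A minimal sketch of what that header might look like (the exact `author` and `output` lines will match whatever RStudio generated for you):
```
---
title: "My Name's Fisheries Analysis"
author: "My Name"
output: html_document
---
```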
> ***ProTip*** Pay attention to URLs. An unsung skill of the modern analyst is to be able to navigate the internet by keeping an eye on patterns.
So cool!
You and your partner have created individual webpages here, but they do not talk to each other (i.e. you can’t navigate between them or even know that one exists from the other). We will not organize these pages into a website today, but you can practice this on your own with this hour\-long tutorial: [Making free websites with RStudio’s R Markdown](https://jules32.github.io/rmarkdown-website-tutorial/).
> **Aside:** On websites, if something is called `index.html`, that defaults to the home page. So [https://jules32\.github.io/r\-collab/](https://jules32.github.io/r-collab/) is the same as [https://jules32\.github.io/r\-collab/index.html](https://jules32.github.io/r-collab/index.html). So as you think about building websites you can develop your index.Rmd file rather than your README.md as your homepage.
#### 9\.7\.0\.1 Troubleshooting
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 9\.7\.1 END **collaborating** session!
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/synthesis.html |
Chapter 10 Synthesis
====================
10\.1 Summary
-------------
In this session, we’ll pull together many of the skills that we’ve learned so far. Working in our existing `yourname_fisheries.Rmd` file within your collaborative project/repo from the previous session (`r-collab`), we’ll wrangle and visualize data from spreadsheets in R Markdown, communicate between RStudio (locally) and GitHub (remotely) to keep our updates safe, then add something new to our collaborator’s document. And we’ll learn a few new things along the way!
**Data used in the synthesis section:**
File name: noaa\_fisheries.csv
Description: NOAA Commercial Fisheries Landing data (1950 \- 2017\)
Accessed from: [https://www.st.nmfs.noaa.gov/commercial\-fisheries/commercial\-landings/](https://www.st.nmfs.noaa.gov/commercial-fisheries/commercial-landings/)
Source: Fisheries Statistics Division of the NOAA Fisheries
*Note on the data:* “aggregate” here means “These names represent aggregations of more than one species. They are not inclusive, but rather represent landings where we do not have species\-specific data. Selecting ‘Sharks’, for example, will not return all sharks but only those where we do not have more specific information.”
### 10\.1\.1 Objectives
* Synthesize data wrangling and visualization skills learned so far
* Add a few new tools for data cleaning from `stringr`
* Work collaboratively in an R Markdown file
* Publish your collaborative work as a webpage
### 10\.1\.2 Resources
* [Project oriented workflows](https://www.tidyverse.org/blog/2017/12/workflow-vs-script/) by Jenny Bryan
10\.2 Attach packages, read in and explore the data
---------------------------------------------------
In **your** .Rmd, attach the necessary packages in the topmost code chunk:
```
library(tidyverse)
library(here)
library(janitor)
library(paletteer) # install.packages("paletteer")
```
Open the noaa\_landings.csv file in Excel. Note that cells we want to be stored as `NA` actually have the words “no data” \- but we can include an additional argument in `read_csv()` to specify what we want to replace with `NA`.
Read in the noaa\_landings.csv data as object **us\_landings**, adding argument `na = "no data"` to automatically reassign any “no data” entries to `NA` during import:
```
us_landings <- read_csv(here("data","noaa_landings.csv"),
na = "no data")
```
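As an aside, `na` accepts a character vector, so if a spreadsheet uses several different placeholders for missing values you could (hypothetically) catch them all at once:
```
us_landings <- read_csv(here("data", "noaa_landings.csv"),
                        na = c("no data", "N/A", ""))
```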
Go exploring a bit:
```
summary(us_landings)
View(us_landings)
names(us_landings)
head(us_landings)
tail(us_landings)
```
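If you installed `skimr` for the earlier reprex activity, `skimr::skim()` is another quick way to get an overview, including how much is missing in each column:
```
skimr::skim(us_landings)
```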
10\.3 Some data cleaning to get salmon landings by species
----------------------------------------------------------
Now that we have our data in R, let’s think about some ways that we might want to make it more coder\- and analysis\-friendly.
Brainstorm with your partner about ways you might clean the data up a bit. Things to consider:
* Do you like typing in all caps?
* Are the column names manageable?
* Do we want symbols alongside values?
If your answer to all three is “no,” then we’re flying on the same plane. Here we’ll do some wrangling led by your recommendations for step\-by\-step cleaning.
Which of these would it make sense to do *first*, to make any subsequent steps easier for coding? We’ll start with `janitor::clean_names()` to get all column names into lowercase\_snake\_case:
```
salmon_clean <- us_landings %>%
clean_names()
```
Continue building on that sequence to:
* Convert the text entries to lower case (`mutate()` \+ `str_to_lower()`)
* Remove dollar signs in value column (`mutate()` \+ `parse_number()`)
* Keep only observations that include “salmon” (`filter()` \+ `str_detect()`)
* Separate “salmon” from any additional refined information on species (`separate()`)
The entire thing might look like this:
```
salmon_clean <- us_landings %>%
clean_names() %>% # Make column headers snake_case
mutate(
afs_name = str_to_lower(afs_name)
  ) %>% # Convert the afs_name column to lowercase
mutate(dollars_num = parse_number(dollars_usd)) %>% # Just keep numbers from $ column
filter(str_detect(afs_name, pattern = "salmon")) %>% # Only keep entries w/"salmon"
separate(afs_name, into = c("group", "subgroup"), sep = ", ") %>% # Note comma-space
drop_na(dollars_num) # Drop (listwise deletion) any observations with NA for dollars_num
```
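If any of those steps feel mysterious, try the individual functions on tiny made\-up strings in the Console first. A rough sketch (the inputs here are invented, not taken from the dataset):
```
str_to_lower("SALMON, CHINOOK")          # "salmon, chinook"
parse_number("$1,234.50")                # 1234.5
str_detect("salmon, chinook", "salmon")  # TRUE
str_detect("tuna, albacore", "salmon")   # FALSE
```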
Explore **salmon\_clean**.
10\.4 Find total annual US value ($) for each salmon subgroup
-------------------------------------------------------------
Find the annual total US landings and dollar value (summing across all states) for each type of salmon using `group_by()` \+ `summarize()`.
Think about what data/variables we want to use here: If we want to find **annual values** by **subgroup**, then what variables are we going to group by? Are we going to start from us\_landings, or from salmon\_clean?
```
salmon_us_annual <- salmon_clean %>%
group_by(year, subgroup) %>%
summarize(
tot_value = sum(dollars_num, na.rm = TRUE),
)
```
```
## `summarise()` regrouping output by 'year' (override with `.groups` argument)
```
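That last chunk of output is just dplyr telling you how the result is grouped; it is a message, not an error. If you would rather not see it (and do not need the grouping afterwards), you can be explicit with the `.groups` argument, available in dplyr 1\.0 and later:
```
salmon_us_annual <- salmon_clean %>%
  group_by(year, subgroup) %>%
  summarize(tot_value = sum(dollars_num, na.rm = TRUE),
            .groups = "drop")
```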
10\.5 Make a graph of US commercial fisheries value by species over time with `ggplot2`
---------------------------------------------------------------------------------------
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)")
salmon_gg
```
10\.6 Built\-in color palettes
------------------------------
Want to change the color scheme of your graph? Using a consistent theme and color scheme is a great way to make reports more cohesive within groups or organizations, and means less time is spent manually updating graphs to maintain consistency!
Luckily, there are **many** color palettes already built. For a glimpse, check out the ReadMe for [`paletteer` by Emil Hvidtfeldt](https://emilhvitfeldt.github.io/paletteer/), which is an aggregate package of many existing color palette packages.
In fact, let’s go ahead and install it by running `install.packages("paletteer")` in the Console.
**Question:** Once we have `paletteer` installed, what do we have to do to actually **use** the functions \& palettes in `paletteer`?
**Answer:** Attach the package! Update the topmost code chunk with `library(paletteer)` to make sure all of its functions are available.
Now, explore the different available packages and color palettes by typing (in the Console) `View(palettes_d_names)`. Then, add a new color palette from the list to your discrete series by adding a `ggplot` layer that looks like this:
```
scale_color_paletteer_d("package_name::palette_name")
```
**Note:** Beware of the palette *length* \- we have 7 subgroups of salmon, so we will want to pick palettes that have at least a length of 7\.
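One way to narrow the list down from the Console, assuming the `length` column in `palettes_d_names` is (as it appears to be) the number of colors in each palette:
```
palettes_d_names %>%
  filter(length >= 7) %>%
  head()
```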
Once we add that layer, our entire graph code will look something like this (here, using the OkabeIto palette from the `colorblindr` package):
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)") +
scale_color_paletteer_d("colorblindr::OkabeIto")
salmon_gg
```
Looking again at `palettes_d_names`, choose another color palette and update your gg\-graph.
10\.7 Sync with GitHub remote
-----------------------------
Stage, commit, (pull), and push your updates to GitHub for safe storage \& sharing. Check to make sure that the changes have been stored in your shared `r-collab` repo.
10\.8 Add an image to your partner’s document
---------------------------------------------
Now, let’s collaborate with our partner.
First:
* Pull again to make sure you have your partner’s most updated versions (there shouldn’t be any conflicts since you’ve been working in different .Rmd files)
* Now, open your **partner’s** .Rmd in RStudio
Second:
* Go to [octodex.github.com](https://octodex.github.com/) and find a version of octocat that you like
* On the image, right click and choose “Copy image location”
* In your **partner’s** .Rmd, add the image at the end using:
`![](paste\_image\_location\_you\_just\_copied\_here)`
* Knit the .Rmd and check to ensure that **your** octocat shows up in **their** document
* Save, stage, commit, pull, then push to send your contributions back
* Pull again to make sure **your** .Rmd has updates from **your collaborator**
* Check out your document as a webpage published with gh\-pages!
#### 10\.8\.0\.1 Reminder for gh\-pages link:
`username.github.io/repo-name/file-name`
#### 10\.8\.0\.2 Troubleshooting gh\-pages viewing revisited:
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 10\.8\.1 End Synthesis session!
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/r-for-excel/synthesis.html |
Chapter 10 Synthesis
====================
10\.1 Summary
-------------
In this session, we’ll pull together many of the skills that we’ve learned so far. Working in our existing `yourname_fisheries.Rmd` file within your collaborative project/repo from the previous session (`r-collab`), we’ll wrangle and visualize data from spreadsheets in R Markdown, communicate between RStudio (locally) and GitHub (remotely) to keep our updates safe, then add something new to our collaborator’s document. And we’ll learn a few new things along the way!
**Data used in the synthesis section:**
File name: noaa\_fisheries.csv
Description: NOAA Commercial Fisheries Landing data (1950 \- 2017\)
Accessed from: [https://www.st.nmfs.noaa.gov/commercial\-fisheries/commercial\-landings/](https://www.st.nmfs.noaa.gov/commercial-fisheries/commercial-landings/)
Source: Fisheries Statistics Division of the NOAA Fisheries
*Note on the data:* “aggregate” here means “These names represent aggregations of more than one species. They are not inclusive, but rather represent landings where we do not have species\-specific data. Selecting”Sharks“, for example, will not return all sharks but only those where we do not have more specific information.”
### 10\.1\.1 Objectives
* Synthesize data wrangling and visualization skills learned so far
* Add a few new tools for data cleaning from `stringr`
* Work collaboratively in an R Markdown file
* Publish your collaborative work as a webpage
### 10\.1\.2 Resources
* [Project oriented workflows](https://www.tidyverse.org/blog/2017/12/workflow-vs-script/) by Jenny Bryan
10\.2 Attach packages, read in and explore the data
---------------------------------------------------
In **your** .Rmd, attach the necessary packages in the topmost code chunk:
```
library(tidyverse)
library(here)
library(janitor)
library(paletteer) # install.packages("paletteer")
```
Open the noaa\_landings.csv file in Excel. Note that cells we want to be stored as `NA` actually have the words “no data” \- but we can include an additional argument in `read_csv()` to specify what we want to replace with `NA`.
Read in the noaa\_landings.csv data as object **us\_landings**, adding argument `na = "no data"` to automatically reassign any “no data” entries to `NA` during import:
```
us_landings <- read_csv(here("data","noaa_landings.csv"),
na = "no data")
```
Go exploring a bit:
```
summary(us_landings)
View(us_landings)
names(us_landings)
head(us_landings)
tail(us_landings)
```
10\.3 Some data cleaning to get salmon landings by species
----------------------------------------------------------
Now that we have our data in R, let’s think about some ways that we might want to make it more coder\- and analysis\-friendly.
Brainstorm with your partner about ways you might clean the data up a bit. Things to consider:
* Do you like typing in all caps?
* Are the column names manageable?
* Do we want symbols alongside values?
If your answer to all three is “no,” then we’re flying on the same plane. Here we’ll do some wrangling led by your recommendations for step\-by\-step cleaning.
Which of these would it make sense to do *first*, to make any subsequent steps easier for coding? We’ll start with `janitor::clean_names()` to get all column names into lowercase\_snake\_case:
```
salmon_clean <- us_landings %>%
clean_names()
```
Continue building on that sequence to:
* Convert everything to lower case with `mutate()` \+ (`str_to_lower()`)
* Remove dollar signs in value column (`mutate()` \+ `parse_number()`)
* Keep only observations that include “salmon” (`filter()` \+ `str_detect()`)
* Separate “salmon” from any additional refined information on species (`separate()`)
The entire thing might look like this:
```
salmon_clean <- us_landings %>%
clean_names() %>% # Make column headers snake_case
mutate(
afs_name = str_to_lower(afs_name)
) %>% # Converts character columns to lowercase
mutate(dollars_num = parse_number(dollars_usd)) %>% # Just keep numbers from $ column
filter(str_detect(afs_name, pattern = "salmon")) %>% # Only keep entries w/"salmon"
separate(afs_name, into = c("group", "subgroup"), sep = ", ") %>% # Note comma-space
drop_na(dollars_num) # Drop (listwise deletion) any observations with NA for dollars_num
```
Explore **salmon\_clean**.
10\.4 Find total annual US value ($) for each salmon subgroup
-------------------------------------------------------------
Find the annual total US landings and dollar value (summing across all states) for each type of salmon using `group_by()` \+ `summarize()`.
Think about what data/variables we want to use here: If we want to find **annual values** by **subgroup**, then what variables are we going to group by? Are we going to start from us\_landings, or from salmon\_clean?
```
salmon_us_annual <- salmon_clean %>%
group_by(year, subgroup) %>%
summarize(
tot_value = sum(dollars_num, na.rm = TRUE),
)
```
```
## `summarise()` regrouping output by 'year' (override with `.groups` argument)
```
10\.5 Make a graph of US commercial fisheries value by species over time with `ggplot2`
---------------------------------------------------------------------------------------
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)")
salmon_gg
```
10\.6 Built\-in color palettes
------------------------------
Want to change the color scheme of your graph? Using a consistent theme and color scheme is a great way to make reports more cohesive within groups or organizations, and means less time is spent manually updating graphs to maintain consistency!
Luckily, there are **many** already built color palettes. For a glimpse, check out the ReadMe for [`paletteer` by Emil Hvidtfelt](https://emilhvitfeldt.github.io/paletteer/), which is an aggregate package of many existing color palette packages.
In fact, let’s go ahead and install it by running `install.packages("paletteer")` in the Console.
**Question:** Once we have `paletteer` installed, what do have to do to actually **use** the functions \& palettes in `paletteer`?
**Answer:** Attach the package! Update the topmost code chunk with `library(paletteer)` to make sure all of its functions are available.
Now, explore the different available packages and color palettes by typing (in the Console) `View(palettes_d_names)`. Then, add a new color palette from the list to your discrete series with an adding `ggplot` layer that looks like this:
```
scale_color_paletteer_d("package_name::palette_name")
```
**Note:** Beware of the palette *length* \- we have 7 subgroups of salmon, so we will want to pick palettes that have at least a length of 7\.
Once we add that layer, our entire graph code will look something like this (here, using the OkabeIto palette from the `colorblindr` package):
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)") +
scale_color_paletteer_d("colorblindr::OkabeIto")
salmon_gg
```
Looking again at `palettes_d_names`, choose another color palette and update your gg\-graph.
10\.7 Sync with GitHub remote
-----------------------------
Stage, commit, (pull), and push your updates to GitHub for safe storage \& sharing. Check to make sure that the changes have been stored in your shared `r-collab` repo.
10\.8 Add an image to your partner’s document
---------------------------------------------
Now, let’s collaborate with our partner.
First:
* Pull again to make sure you have your partner’s most updated versions (there shouldn’t be any conflicts since you’ve been working in different .Rmd files)
* Now, open your **partner’s** .Rmd in RStudio
Second:
* Go to [octodex.github.com](https://octodex.github.com/) and find a version of octocat that you like
* On the image, right click and choose “Copy image location”
* In your **partner’s** .Rmd, add the image at the end using:
`!()[paste_image_location_you_just_copied_here]`
* Knit the .Rmd and check to ensure that **your** octocat shows up in **their** document
* Save, stage, commit, pull, then push to send your contributions back
* Pull again to make sure **your** .Rmd has updates from **your collaborator**
* Check out out your document as a webpage published with gh\-pages!
#### 10\.8\.0\.1 Reminder for gh\-pages link:
`username.github.io/repo-name/file-name`
#### 10\.8\.0\.2 Troubleshooting gh\-pages viewing revisited:
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 10\.8\.1 End Synthesis session!
10\.1 Summary
-------------
In this session, we’ll pull together many of the skills that we’ve learned so far. Working in our existing `yourname_fisheries.Rmd` file within your collaborative project/repo from the previous session (`r-collab`), we’ll wrangle and visualize data from spreadsheets in R Markdown, communicate between RStudio (locally) and GitHub (remotely) to keep our updates safe, then add something new to our collaborator’s document. And we’ll learn a few new things along the way!
**Data used in the synthesis section:**
File name: noaa\_fisheries.csv
Description: NOAA Commercial Fisheries Landing data (1950 \- 2017\)
Accessed from: [https://www.st.nmfs.noaa.gov/commercial\-fisheries/commercial\-landings/](https://www.st.nmfs.noaa.gov/commercial-fisheries/commercial-landings/)
Source: Fisheries Statistics Division of the NOAA Fisheries
*Note on the data:* “aggregate” here means “These names represent aggregations of more than one species. They are not inclusive, but rather represent landings where we do not have species\-specific data. Selecting”Sharks“, for example, will not return all sharks but only those where we do not have more specific information.”
### 10\.1\.1 Objectives
* Synthesize data wrangling and visualization skills learned so far
* Add a few new tools for data cleaning from `stringr`
* Work collaboratively in an R Markdown file
* Publish your collaborative work as a webpage
### 10\.1\.2 Resources
* [Project oriented workflows](https://www.tidyverse.org/blog/2017/12/workflow-vs-script/) by Jenny Bryan
10\.2 Attach packages, read in and explore the data
---------------------------------------------------
In **your** .Rmd, attach the necessary packages in the topmost code chunk:
```
library(tidyverse)
library(here)
library(janitor)
library(paletteer) # install.packages("paletteer")
```
Open the noaa\_landings.csv file in Excel. Note that cells we want to be stored as `NA` actually have the words “no data” \- but we can include an additional argument in `read_csv()` to specify what we want to replace with `NA`.
Read in the noaa\_landings.csv data as object **us\_landings**, adding argument `na = "no data"` to automatically reassign any “no data” entries to `NA` during import:
```
us_landings <- read_csv(here("data","noaa_landings.csv"),
na = "no data")
```
Go exploring a bit:
```
summary(us_landings)
View(us_landings)
names(us_landings)
head(us_landings)
tail(us_landings)
```
10\.3 Some data cleaning to get salmon landings by species
----------------------------------------------------------
Now that we have our data in R, let’s think about some ways that we might want to make it more coder\- and analysis\-friendly.
Brainstorm with your partner about ways you might clean the data up a bit. Things to consider:
* Do you like typing in all caps?
* Are the column names manageable?
* Do we want symbols alongside values?
If your answer to all three is “no,” then we’re flying on the same plane. Here we’ll do some wrangling led by your recommendations for step\-by\-step cleaning.
Which of these would it make sense to do *first*, to make any subsequent steps easier for coding? We’ll start with `janitor::clean_names()` to get all column names into lowercase\_snake\_case:
```
salmon_clean <- us_landings %>%
clean_names()
```
Continue building on that sequence to:
* Convert everything to lower case (`mutate()` \+ `str_to_lower()`)
* Remove dollar signs in value column (`mutate()` \+ `parse_number()`)
* Keep only observations that include “salmon” (`filter()` \+ `str_detect()`)
* Separate “salmon” from any additional refined information on species (`separate()`)
The entire thing might look like this:
```
salmon_clean <- us_landings %>%
clean_names() %>% # Make column headers snake_case
mutate(
afs_name = str_to_lower(afs_name)
) %>% # Converts character columns to lowercase
mutate(dollars_num = parse_number(dollars_usd)) %>% # Just keep numbers from $ column
filter(str_detect(afs_name, pattern = "salmon")) %>% # Only keep entries w/"salmon"
separate(afs_name, into = c("group", "subgroup"), sep = ", ") %>% # Note comma-space
drop_na(dollars_num) # Drop (listwise deletion) any observations with NA for dollars_num
```
Explore **salmon\_clean**.
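A few quick checks can confirm that the cleaning worked as intended. This is just a sketch; use whichever exploration functions you prefer (`count()` here is the `dplyr` version attached with the tidyverse):
```
names(salmon_clean)                # are the column names lowercase_snake_case?
head(salmon_clean)                 # do group, subgroup and dollars_num look right?
count(salmon_clean, subgroup)      # how many rows for each salmon subgroup?
summary(salmon_clean$dollars_num)  # a sensible numeric range, with no leftover "$"?
```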
10\.4 Find total annual US value ($) for each salmon subgroup
-------------------------------------------------------------
Find the annual total US landings and dollar value (summing across all states) for each type of salmon using `group_by()` \+ `summarize()`.
Think about what data/variables we want to use here: If we want to find **annual values** by **subgroup**, then what variables are we going to group by? Are we going to start from us\_landings, or from salmon\_clean?
```
salmon_us_annual <- salmon_clean %>%
group_by(year, subgroup) %>%
summarize(
    tot_value = sum(dollars_num, na.rm = TRUE)
)
```
```
## `summarise()` regrouping output by 'year' (override with `.groups` argument)
```
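That message is just `dplyr` telling you how the result is grouped. If you would rather silence it and get back an ungrouped data frame, you can set the `.groups` argument yourself; a minimal sketch (requires `dplyr` 1.0 or later):
```
salmon_us_annual <- salmon_clean %>%
  group_by(year, subgroup) %>%
  summarize(tot_value = sum(dollars_num, na.rm = TRUE),
            .groups = "drop") # return an ungrouped result and suppress the message
```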
10\.5 Make a graph of US commercial fisheries value by species over time with `ggplot2`
---------------------------------------------------------------------------------------
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)")
salmon_gg
```
10\.6 Built\-in color palettes
------------------------------
Want to change the color scheme of your graph? Using a consistent theme and color scheme is a great way to make reports more cohesive within groups or organizations, and means less time is spent manually updating graphs to maintain consistency!
Luckily, there are **many** color palettes already built for you. For a glimpse, check out the ReadMe for [`paletteer` by Emil Hvitfeldt](https://emilhvitfeldt.github.io/paletteer/), which aggregates many existing color palette packages.
In fact, let’s go ahead and install it by running `install.packages("paletteer")` in the Console.
**Question:** Once we have `paletteer` installed, what do we have to do to actually **use** the functions \& palettes in `paletteer`?
**Answer:** Attach the package! Update the topmost code chunk with `library(paletteer)` to make sure all of its functions are available.
Now, explore the different available packages and color palettes by typing (in the Console) `View(palettes_d_names)`. Then, add a new color palette from the list to your discrete series by adding a `ggplot` layer that looks like this:
```
scale_color_paletteer_d("package_name::palette_name")
```
**Note:** Beware of the palette *length* \- we have 7 subgroups of salmon, so we will want to pick palettes that have at least a length of 7\.
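One way to check palette lengths before picking one is to filter `palettes_d_names` itself. A sketch, assuming the data frame includes the `length` column it ships with in recent `paletteer` releases:
```
# only show discrete palettes with at least 7 colors
palettes_d_names %>%
  filter(length >= 7)
```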
Once we add that layer, our entire graph code will look something like this (here, using the OkabeIto palette from the `colorblindr` package):
```
salmon_gg <-
ggplot(salmon_us_annual,
aes(x = year, y = tot_value, group = subgroup)) +
geom_line(aes(color = subgroup)) +
theme_bw() +
labs(x = "year", y = "US commercial salmon value (USD)") +
scale_color_paletteer_d("colorblindr::OkabeIto")
salmon_gg
```
Looking again at `palettes_d_names`, choose another color palette and update your gg\-graph.
10\.7 Sync with GitHub remote
-----------------------------
Stage, commit, (pull), and push your updates to GitHub for safe storage \& sharing. Check to make sure that the changes have been stored in your shared `r-collab` repo.
10\.8 Add an image to your partner’s document
---------------------------------------------
Now, let’s collaborate with our partner.
First:
* Pull again to make sure you have your partner’s most updated versions (there shouldn’t be any conflicts since you’ve been working in different .Rmd files)
* Now, open your **partner’s** .Rmd in RStudio
Second:
* Go to [octodex.github.com](https://octodex.github.com/) and find a version of octocat that you like
* On the image, right click and choose “Copy image location”
* In your **partner’s** .Rmd, add the image at the end using:
`![](paste_image_location_you_just_copied_here)`
* Knit the .Rmd and check to ensure that **your** octocat shows up in **their** document
* Save, stage, commit, pull, then push to send your contributions back
* Pull again to make sure **your** .Rmd has updates from **your collaborator**
* Check out your document as a webpage published with gh\-pages!
#### 10\.8\.0\.1 Reminder for gh\-pages link:
`username.github.io/repo-name/file-name`
#### 10\.8\.0\.2 Troubleshooting gh\-pages viewing revisited:
* 404 error? Remove trailing / from the url
* Wants you to download? Remove trailing .Rmd from the url
### 10\.8\.1 End Synthesis session!
| Field Specific |
info201.github.io | https://info201.github.io/command-line.html |
Chapter 2 The Command Line
==========================
The **command\-line** is an *interface* to a computer—a way for you (the human) to communicate with the machine. But unlike common graphical interfaces that use [windows, icons, menus, and pointers](https://en.wikipedia.org/wiki/WIMP_(computing)), the command\-line is *text\-based*: you type commands instead of clicking on icons. The command\-line lets you do everything you’d normally do by clicking with a mouse, but by typing in a manner similar to programming!
An example of the command\-line in action (from Wikipedia).
The command\-line is not as friendly or intuitive as a graphical interface: it’s much harder to learn and figure out. However, it has the advantage of being both more powerful and more efficient in the hands of expert users. (It’s faster to type than to move a mouse, and you can do *lots* of “clicks” with a single command.) The command\-line is also used when working on remote servers or other computers that for some reason do not have a graphical interface enabled. Thus, the command line is an essential tool for all professional developers, particularly when working with large amounts of data or files.
This chapter will give you a brief introduction to basic tasks using the command\-line: enough to get you comfortable navigating the interface and able to interpret commands.
2\.1 Accessing the Command\-Line
--------------------------------
In order to use the command\-line, you will need to open a **command shell** (a.k.a. a *command prompt*). This is a program that provides the interface to type commands into. You should have installed a command shell (hereafter “the terminal”) as part of [setting up your machine](setup-machine.html#setup-machine).
Once you open up the shell (Terminal or Git Bash), you should see something like this (red notes are added):
A newly opened command\-line.
This is the textual equivalent of having opened up Finder or File Explorer and having it show you the user’s “Home” folder. The text shown lets you know:
* What **machine** you’re currently interfacing with (you can use the command\-line to control different computers across a network or the internet).
* What **directory** (folder) you are currently looking at (`~` is a shorthand for the “home directory”).
* What **user** you are logged in as.
After that you’ll see the **prompt** (typically denoted as the `$` symbol), which is where you will type in your commands.
2\.2 Navigating the Command Line
--------------------------------
Although the command\-prompt gives you the name of the folder you’re in, you might like more detail about where that folder is. Time to send your first command! At the prompt, type:
```
pwd
```
This stands for **p**rint **w**orking **d**irectory (shell commands are highly abbreviated to make them faster to type), and will tell the computer to print the folder you are currently “in”.
*Fun fact:* technically, this command usually starts a tiny program (app) that does exactly one thing: prints the working directory. When you run a command, you’re actually executing a tiny program! And when you run programs (tiny or large) on the command\-line, it looks like you’re typing in commands.
Folders on computers are stored in a hierarchy: each folder has more folders inside it, which have more folders inside them. This produces a [tree](https://en.wikipedia.org/wiki/Tree_(data_structure)) structure which on a Mac may look like:
A Directory Tree, from Bradnam and Korf.
You describe what folder you are in by putting a slash `/` between each folder in the tree: thus `/Users/iguest` means “the `iguest` folder, which is inside the `Users` folder”.
At the very top (or bottom, depending on your point of view) is the **root** `/` directory, which has no name and so is just indicated with that single slash. So `/Users/iguest` really means “the `iguest` folder, which is inside the `Users` folder, which is inside the *root* folder”.
**A Note about Windows:** while Mac (and other unix computers) use `/` as the root directory of the whole system, Windows normally shows `This PC` in Explorer and similar graphical frontends, followed by drive letters like `C:\`, followed by folder names separated by backslashes. *Git Bash* uses a unix\-style filesystem tree, with `/` replacing `This PC`, forward slashes replacing backslashes, and drive letters written without the colon. So `C:\Users\Xiaotian\Desktop` can be accessed as `/c/Users/Xiaotian/Desktop`.
### 2\.2\.1 Changing Directories
What if you want to change folders? In a graphical system like Finder, you would just double\-click on the folder to open it. But there’s no clicking on the command\-line.
This includes clicking to move the cursor to an earlier part of the command you typed. You’ll need to use the left and right arrow keys to move the cursor instead!
**Protip:** The up and down arrow keys will let you cycle though your previous commands so you don’t need to re\-type them!
Since you can’t click on a folder, you’ll need to use another command:
```
cd folder_name
```
The first word is the **command**, or what you want the computer to do. In this case, you’re issuing the command that means **c**hange **d**irectory.
The second word is an example of an **argument**, which is a programming term that means “more details about what to do”. In this case, you’re providing a *required* argument of what folder you want to change to! (You’ll of course need to replace `folder_name` with the name of the folder).
* Try changing to the `Desktop` folder, which should be inside the home folder you started in—you could see it in Finder or File Explorer!
**A note about Desktop on Windows:** If you’re on Windows and the contents of your Desktop in the terminal don’t match the contents of your actual Desktop, your computer may be configured to have your Desktop in a different directory. This usually happens if you’ve set up software to back up your Desktop to an online service. Instead of `~/Desktop`, check if your Desktop folder is really in `~/OneDrive/Desktop` or `~/Dropbox/Desktop`.
* After you change folders, try printing your current location. Can you see that it has changed?
### 2\.2\.2 Listing Files
In a graphical system, once you’ve double\-clicked on a folder, Finder will show you the contents of that folder. The command\-line doesn’t do this automatically; instead you need another command:
```
ls [folder_name]
```
This command says to **l**i**s**t the folder contents. Note that the *argument* here is written in brackets (`[]`) to indicate that it is *optional*. If you just issue the **`ls`** command without an argument, it will list the contents of the current folder. If you include the optional argument (leaving off the brackets), you can “peek” at the contents of a folder you are not currently in.
**Warning**: The command\-line is not always great about giving **feedback** for your actions. For example, if there are no files in the folder, then `ls` will simply show nothing, potentially looking like it “didn’t work”. Or when typing a **password**, the letters you type won’t show (not even as `*`) as a security measure.
Just because you don’t see any results from your command/typing, doesn’t mean it didn’t work! Trust in yourself, and use basic commands like `ls` and `pwd` to confirm any changes if you’re unsure. Take it slow, one step at a time.
### 2\.2\.3 Paths
Note that both the **`cd`** and **`ls`** commands work even for folders that are not “immediately inside” the current directory! You can refer to *any* file or folder on the computer by specifying its **path**. A file’s path is “how you get to that file”: the list of folders you’d need to click through to get to the file, with each folder separated by a `/`:
```
cd /Users/iguest/Desktop/
```
This says to start at the root directory (that initial `/`), then go to `Users`, then go to `iguest`, then to `Desktop`.
Because this path starts with a specific directory (the root directory), it is referred to as an **absolute path**. No matter what folder you currently happen to be in, that path will refer to the correct file because it always starts on its journey from the root.
Contrast that with:
```
cd iguest/Desktop/
```
Because this path doesn’t have the leading slash, it just says to “go to the `iguest/Desktop` folder *from the current location*”. It is known as a **relative path**: it gives you directions to a file *relative to the current folder*. As such, the relative path `iguest/Desktop/` will only refer to the correct location if you happen to be in the `/Users` folder; if you start somewhere else, who knows where you’ll end up!
You should **always** use relative paths, particularly when programming! Because you’ll almost always be managing multiple files in a project, you should refer to the files *relatively* within your project. That way, your program can easily work across computers. For example, if your code refers to `/Users/your-user-name/project-name/data`, it can only run on the `your-user-name` account. However, if you use a *relative path* within your code (i.e., `project-name/data`), the program will run on multiple computers (crucial for collaborative projects).
You can refer to the “current folder” by using a single dot **`.`**. So the command
```
ls .
```
means “list the contents of the current folder” (the same thing you get if you leave off the argument).
If you want to go *up* a directory, you use *two* dots: **`..`** to refer to the **parent** folder (that is, the one that contains this one). So the command
```
ls ..
```
means “list the contents of the folder that contains the current folder”.
Note that **`.`** and **`..`** act just like folder names, so you can include them anywhere in paths: `../../my_folder` says to go up two directories, and then into `my_folder`.
**Protip:** Most command shells like Terminal and Git Bash support **tab\-completion**. If you type out just the first few letters of a file or folder name and then hit the `tab` key, it will automatically fill in the rest of the name! If the name is ambiguous (e.g., you type `Do` and there is both a `Documents` and a `Downloads` folder), you can hit `tab` *twice* to see the list of matching folders. Then add enough letters to distinguish them and tab to complete! This will make your life *a lot* easier.
Additionally, you can use a tilde **`~`** as shorthand for the home directory of the current user. Just like `.` refers to “current folder”, `~` refers to the user’s home directory (usually `/Users/USERNAME` on mac or `/c/Users/USERNAME` on windows). And of course, you can use the tilde as part of a path as well (e.g., `~/Desktop` is an *absolute path* to the desktop for the current user).
**A note about home directory on Windows:** unlike unix, windows does not have a consistent concept of home directory, and different programs may interpret it in a different way. For instance, *gitbash* assumes the home directory is `/c/Users/USERNAME` while `R` assumes it is `/c/Users/USERNAME/Desktop`.
As you perhaps noticed above, the command line uses a space as the separator between the command and additional arguments. This makes it more complicated to work with paths and file names that contain spaces. For instance, if you want to change into a folder called `my folder`, then issuing `cd my folder` results in an error (“cd: too many arguments”). This is because `cd` thinks that `my` is the folder name, and `folder` after the space is an additional argument. There are two ways to handle spaces: first, you can put the path name in quotes, like `cd "my folder"`; second, you can *escape* the space with a backslash: `cd my\ folder`. Both options work reasonably well, but in general we recommend avoiding spaces in file names whenever you work with the command line.
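For example, here is roughly what that looks like in practice (`my folder` is a hypothetical folder name):
```
cd my folder      # error: cd sees "my" and "folder" as two separate arguments
cd "my folder"    # works: the quotes keep the space inside a single argument
cd my\ folder     # also works: the backslash escapes the space
```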
2\.3 File Commands
------------------
Once you’re comfortable navigating folders in the command\-line, you can start to use it to do all the same things you would do with Finder or File Explorer, simply by using the correct command. Here is a short list of commands to get you started using the command prompt, though there are [many more](http://www.lagmonster.org/docs/unix/intro-137.html):
| Command | Behavior |
| --- | --- |
| **`mkdir`** | **m**a**k**e a **dir**ectory |
| **`rm`** | **r**e**m**ove a file or folder |
| **`cp`** | **c**o**p**y a file from one location to another |
| **`open`** | opens a file or folder (Mac only) |
| **`start`** | opens a file or folder (Windows only) |
| **`cat`** | con**cat**enate (combine) file contents and display the results |
| **`history`** | show previous commands executed |
**Warning**: The command\-line makes it **dangerously easy** to *permanently delete* multiple files or folders and *will not* ask you to confirm that you want to delete them (or move them to the “recycling bin”). Be very careful when using the terminal to manage your files, as it is very powerful.
Be aware that many of these commands **won’t print anything** when you run them. This often means that they worked; they just did so quietly. If it *doesn’t* work, you’ll know because you’ll see a message telling you so (and why, if you read the message). So just because you didn’t get any output doesn’t mean you did something wrong—you can use another command (such as **`ls`**) to confirm that the files or folders changed the way you wanted!
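As a rough sketch of how a few of these commands fit together (all of the folder and file names here are made up):
```
mkdir notes             # make a new folder called notes
cp todo.txt notes/      # copy todo.txt into the notes folder
ls notes                # confirm the copy is there
cat notes/todo.txt      # print the file's contents to the terminal
rm notes/todo.txt       # careful: deletes the copy permanently!
```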
### 2\.3\.1 Learning New Commands
How can you figure out what kind of arguments these commands take? You can look it up! This information is available online, but many command shells (though *not* Git Bash, unfortunately) also include their own manual you can use to look up commands!
```
man mkdir
```
This will show the **man**ual for the **`mkdir`** program/command.
Because manuals are often long, they are opened up in a command\-line viewer called [`less`](https://en.wikipedia.org/wiki/Less_(Unix)). You can “scroll” up and down by using the arrow keys. Hit the `q` key to **q**uit and return to the command\-prompt.
The `mkdir` man page.
If you look under “Synopsis” you can see a summary of all the different arguments this command understands. A few notes about reading this syntax:
* Recall that anything in brackets `[]` is optional. Arguments that are not in brackets (e.g., `directory_name`) are required.
* **“Options”** (or “flags”) for command\-line programs are often marked with a leading dash **`-`** to make them distinct from file or folder names. Options may change the way a command\-line program behaves—like how you might set “easy” or “hard” mode in a game. You can either write out each option individually, or combine them: **`mkdir -p -v`** and **`mkdir -pv`** are equivalent.
+ Some options may require an additional argument beyond just indicating a particular operation style. In this case, you can see that the `-m` option requires you to specify an additional `mode` parameter; see the details below for what this looks like.
* Underlined arguments are ones you choose: you don’t actually type the word `directory_name`, but instead your own directory name! Contrast this with the options: if you want to use the `-p` option, you need to type `-p` exactly.
Command\-line manuals (“man pages”) are often very difficult to read and understand: start by looking at just the required arguments (which are usually straightforward), and then search for and use a particular option if you’re looking to change a command’s behavior.
For practice, try to read the man page for `rm` and figure out how to delete a folder and not just a single file. Note that you’ll want to be careful, as this is a good way to [break things](http://www.pcworld.com/article/3057235/data-center-cloud/that-man-who-deleted-his-entire-company-with-a-line-of-code-it-was-a-hoax.html).
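If you want to check your answer after reading the man page, one common approach uses the *recursive* option (shown here only as a sketch, and again: be very careful with these):
```
rmdir empty_folder    # removes a folder only if it is already empty
rm -r my_folder       # removes a folder and everything inside it, permanently
```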
2\.4 Dealing With Errors
------------------------
Note that the syntax of these commands (how you write them out) is very important. Computers aren’t good at figuring out what you meant if you aren’t really specific; forgetting a space may result in an entirely different action.
Try another command: **`echo`** lets you “echo” (print out) some text. Try echoing `"Hello World"` (which is the traditional first computer program):
```
echo "Hello world"
```
What happens if you forget the closing quote? You keep hitting “enter” but you just get that `>` over and over again! What’s going on?
* Because you didn’t “close” the quote, the shell thinks you are still typing the message you want to echo! When you hit “enter” it adds a *line break* instead of ending the command, and the `>` marks that you’re still going. If you finally close the quote, you’ll see your multi\-line message printed!
**IMPORTANT TIP** If you ever get stuck in the command\-line, hit **`ctrl-c`** (The `control` and `c` keys together). This almost always means “cancel”, and will “stop” whatever program or command is currently running in the shell so that you can try again. Just remember: “**`ctrl-c`** to flee”.
(If that doesn’t work, try hitting the `esc` key, or typing `exit`, `q`, or `quit`. Those commands will cover *most* command\-line programs).
Throughout this book, we’ll discuss a variety of approaches to handling errors in computer programs. While it’s tempting to disregard dense error messages, many programs do provide **error messages** that explain what went wrong. If you enter an unrecognized command, the terminal will inform you of your mistake:
```
lx
> -bash: lx: command not found
```
However, forgetting arguments yields different results. In some cases, there will be a default behavior (see what happens if you enter `cd` without any arguments). If more information is *required* to run a command, your terminal will provide you with a brief summary of the command’s usage:
```
mkdir
> usage: mkdir [-pv] [-m mode] directory ...
```
Take the time to read the error message and think about what the problem might be before you try again.
Resources
---------
* [Learn Enough Command Line to be Dangerous](https://www.learnenough.com/command-line-tutorial#sec-basics)
* [Video series: Bash commands](https://www.youtube.com/watch?v=sqYUYHn-HKg&list=PLCAF7D691FFA25555)
* [List of Common Commands](http://www.lagmonster.org/docs/unix/intro-137.html) (also [here](http://www.math.utah.edu/lab/unix/unix-commands.html))
| Field Specific |
info201.github.io | https://info201.github.io/git-basics.html |
Chapter 4 Git and GitHub
========================
A frightening number of people still email their code to each other, have dozens of versions of the same file, and lack any structured way of backing up their work for inevitable computer failures. This is both time consuming and error prone.
And that is why they should be using **git**.
This chapter will introduce you to the `git` command\-line program and the GitHub cloud storage service, two wonderful tools that track changes to your code (`git`) and facilitate collaboration (GitHub). Git and GitHub are the industry standards for the family of tasks known as **version control**. Being able to manage changes to your code and share it with others is one of the most important technical skills a programmer can learn, and is the focus of this (lengthy) chapter.
4\.1 What is this *git* thing anyway?
-------------------------------------
Git is an example of a **version control system**. [Eric Raymond](https://en.wikipedia.org/wiki/Eric_S._Raymond) defines version control as
> A version control system (VCS) is a tool for managing a collection of program code that provides you with three important capabilities: **reversibility**, **concurrency**, and **annotation**.
Version control systems work a lot like Dropbox or Google Docs: they allow multiple people to work on the same files at the same time, and let you view and “roll back” to previous versions. However, systems like git differ from Dropbox in a couple of key ways:
1. New versions of your files must be explicitly “committed” when they are ready. Git doesn’t save a new version every time you save a file to disk. That approach works fine for word\-processing documents, but not for programming files. You typically need to write some code, save it, test it, debug, make some fixes, and test again before you’re ready to save a new version.
2. For text files (which almost all programming files are), git tracks changes *line\-by\-line*. This means it can easily and automatically combine changes from multiple people, and gives you very precise information about which lines of code changed.
Like Dropbox and Google Docs, git can show you all previous versions of a file and can quickly roll back to one of those previous versions. This is often helpful in programming, especially if you embark on making a massive set of changes, only to discover part way through that those changes were a bad idea (we speak from experience here 😱 ).
But where git really comes in handy is in team development. Almost all professional development work is done in teams, which involves multiple people working on the same set of files at the same time. Git helps the team coordinate all these changes, and provides a record so that anyone can see how a given file ended up the way it did.
There are a number of different version control systems in the world, but [git](http://git-scm.com/) is the de facto standard—particularly when used in combination with the cloud\-based service [GitHub](https://github.com/).
### 4\.1\.1 Git Core Concepts
To understand how git works, you need to understand its core concepts. Read this section carefully, and come back to it if you forget what these terms mean.
* **repository (repo):**
A database containing all the committed versions of all your files, along with some additional metadata, stored in a hidden subdirectory named `.git` within your project directory. If you want to sound cool and in\-the\-know, call a project folder a “repo.”
* **commit:**
A set of file versions that have been added to the repository (saved in the database), along with the name of the person who did the commit, a message describing the commit, and a timestamp. This extra tracking information allows you to see when, why, and by whom changes were made to a given file. Committing a set of changes creates a “snapshot” of what that work looks like at the time—it’s like saving the files, but more so.
* **remote:**
A link to a copy of this same repository on a different machine. Typically this will be a central version of the repository that all local copies on your various development machines point to. You can push (upload) commits to, and pull (download) commits from, a remote repository to keep everything in sync.
* **merging:**
Git supports having multiple different versions of your work that all live side by side (in what are called **branches**), whether those versions are created by one person or many collaborators. Git allows the commits saved in different versions of the code to be easily *merged* (combined) back together without you needing to manually copy and paste different pieces of the code. This makes it easy to separate and then recombine work from different developers.
### 4\.1\.2 Wait, but what is GitHub then?
Git was made to support completely decentralized development, where developers pull commits (sets of changes) from each other’s machines directly. But most professional teams take the approach of creating one central repository on a server that all developers push to and pull from. This repository contains the authoritative version of the source code, and all deployments to the “rest of the world” are done by downloading from this centralized repository.
Teams can set up their own servers to host these centralized repositories, but many choose to use a server maintained by someone else. The most popular of these in the open\-source world is [GitHub](https://github.com/). In addition to hosting centralized repositories, GitHub also offers other team development features, such as issue tracking, wiki pages, and notifications. Public repositories on GitHub are free; private repositories are available as well, though some features require a paid plan.
In short: GitHub is a site that serves as a central authority (or clearinghouse) for multiple people collaborating with git. Git is what you use to do version control; GitHub is one possible place where repositories of code can be stored.
4\.2 Installation \& Setup
--------------------------
This chapter will walk you through all the commands you’ll need to do version control with git. It is written as a “tutorial” to help you practice what you’re reading!
If you haven’t yet, the first thing you’ll need to do is [install git](http://git-scm.com/downloads). You should already have done this as part of [setting up your machine](setup-machine.html#setup-machine).
The first time you use `git` on your machine, you’ll need [to configure](https://help.github.com/articles/set-up-git/) the installation, telling git who you are so you can commit changes to a repository. You can do this by using the `git` command with the `config` option (i.e., running the `git config` command):
```
# enter your full name (without the dashes)
git config --global user.name "your-full-name"
# enter your email address (the one associated with your GitHub account)
git config --global user.email "your-email-address"
```
Setting up an [SSH key](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for GitHub on your own machine is also a huge time saver. If you don’t set up the key, you’ll need to enter your GitHub password each time you want to push changes up to GitHub (which may be multiple times a day). Simply follow the instructions on [this page](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to set up a key, and make sure to only do this on *your machine*.
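If you prefer working from the terminal, the key setup itself boils down to a few commands. The following is a minimal sketch, assuming an OpenSSH client and that you substitute your own email address; the GitHub instructions linked above remain the authoritative guide.

```
# generate a new key pair (press Enter to accept the default file location)
ssh-keygen -t ed25519 -C "your-email-address"
# start the ssh agent and register the new private key with it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```

The *public* half of the key (the `.pub` file) is what you paste into the SSH settings of your GitHub account.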
### 4\.2\.1 Creating a Repo
The first thing you’ll need in order to work with git is to create a **repository**. A repository acts as a “database” of changes that you make to files in a directory.
In order to have a repository, you’ll need to have a directory of files. Create a new folder `git_practice` on your computer’s Desktop. Since you’ll be using the command\-line for this course, you might as well practice creating a new directory programmatically:
Making a folder with the command\-line.
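As a rough sketch of what that figure shows (assuming a Unix\-style shell and that you want the folder on your Desktop):

```
# move to the Desktop, create the practice folder, and step into it
cd ~/Desktop
mkdir git_practice
cd git_practice
```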
You can turn this directory *into* a repository by telling the `git` program to run the `init` action:
```
# run IN the directory of project
# you can easily check this with the "pwd" command
git init
```
This creates a new *hidden* folder called `.git` inside of the current directory (it’s hidden so you won’t see it in Finder, but if you use `ls -a` (list with the **a**ll option) you can see it there). This folder is the “database” of changes that you will make—git will store all changes you commit in this folder. The presence of the `.git` folder causes that directory to become a repository; we refer to the whole directory as the “repo” (an example of [synecdoche](https://en.wikipedia.org/wiki/Synecdoche)).
* Note that because a repo is a single folder, you can have lots of different repos on your machine. Just make sure that they are in separate folders; folders that are *inside* a repo are considered part of that repo, and trying to treat them as a separate repository causes unpleasantness. **Do not put one repo inside of another!**
Multiple folders, multiple repositories.
### 4\.2\.2 Checking Status
Now that you have a repo, the next thing you should do is check its **status**:
```
git status
```
The `git status` command will give you information about the current “state” of the repo. For example, running this command tells us a few things:
* That you’re actually in a repo (otherwise you’ll get an error)
* That you’re on the `master` branch (think: line of development)
* That you’re at the initial commit (you haven’t committed anything yet)
* That currently there are no changes to files that you need to commit (save) to the database
* *What to do next!*
That last point is important. Git status messages are verbose and somewhat awkward to read (this is the command\-line after all), but if you look at them carefully they will almost always tell you what command to use next.
**If you are ever stuck, use `git status` to figure out what to do next!**
If the output of `git status` seems too verbose, you can use `git status -s`, the “short” version, instead. It gives you only the most basic information about what’s going on, which is often all an experienced git user needs.
This makes `git status` the most useful command in the entire process. Learn it, use it, love it.
4\.3 Making Changes
-------------------
Since `git status` told you to create a file, go ahead and do that. Using your [favorite editor](https://atom.io/), create a new file `books.md` inside the repo directory. This [Markdown](markdown.html#markdown) file should contain a *list* of 3 of your favorite books. Make sure you save the changes to your file to disk (to your computer’s hard drive)!
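Any editor will do, but as an illustration (the titles below are placeholders; use your own favorites), the same file can be created straight from the shell:

```
# write a three-item Markdown list to books.md
printf "%s\n" "- The Hobbit" "- Dune" "- Frankenstein" > books.md
```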
### 4\.3\.1 Adding Files
Run `git status` again. You should see that git now gives a list of changed and “untracked” files, as well as instructions about what to do next in order to save those changes to the repo’s database.
The first thing you need to do is to save those changes to the **staging area**. This is like a shopping cart in an online store: you put changes in temporary storage before you commit to recording them in the database (e.g., before hitting “purchase”).
We add files to the staging area using the `git add` command:
```
git add filename
```
(Replacing `filename` with the name/path of the file/folder you want to add).
If you are sure you want to add *all the contents* of the directory (tracked or untracked) to the staging area, you can do it with:
```
git add .
```
**WARNING**: This will add *everything* in
your current directory to your git repo, unless ignored through the
`.gitignore` file ([see below](git-basics.html#gitignore)). This may include your top secret passwords,
gigabytes of data, all your illegally ripped DVDs, and other files that are
unnecessary to upload.
Add the `books.md` file to the staging area. And of course, now that
you’ve changed the repo (you put something in the staging area), you
should run `git status` to see what it says to do. Notice that it tells
you what files are in the staging area, as well as the command to
*unstage* those files (remove them from the “cart”).
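Concretely, assuming your file is named `books.md` as above, that looks like:

```
# stage the new file, then confirm that git now lists it as staged
git add books.md
git status
```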
### 4\.3\.2 Committing
When you’re happy with the contents of your staging area (e.g., you’re ready to purchase), it’s time to **commit** those changes, saving that snapshot of the files in the repository database. We do this with the `git commit` command:
```
git commit -m "your message here"
```
The `"your message here"` should be replaced with a short message saying what changes that commit makes to the repo (see below for details).
**WARNING**: If you forget the `-m` option, git will put you into a command\-line *text editor* so that you can compose a message (then save and exit to finish the commit). If you haven’t done any other configuration, you might be dropped into the ***vim*** editor. Type **`:q`** (**colon** then **q**) and hit enter to flee from this horrid place and try again, remembering the `-m` option! Don’t panic: getting stuck in *vim* [happens to everyone](https://stackoverflow.blog/2017/05/23/stack-overflow-helping-one-million-developers-exit-vim/).
If you do not add any new files to git control, you can also use the shorthand form
```
git commit -am "message"
```
instead of running `git add` and `git commit` separately. The `-a` option tells git to include *all modified files* in this commit.
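Putting the two styles side by side (the commit messages here are just examples):

```
# two-step: stage explicitly, then commit
git add books.md
git commit -m "Add favorite books list"

# one-step shorthand: commit every file git already tracks
git commit -am "Fix typo in book title"
```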
#### 4\.3\.2\.1 Commit Message Etiquette
Your commit messages should [be informative](https://xkcd.com/1296/) about what changes the commit is making to the repo. `"stuff"` is not a good commit message. `"Fix critical authorization error"` is a good commit message.
Commit messages should use the **imperative mood** (`"Add feature"` not `"added feature"`). They should complete the sentence:
> If applied, this commit will **{your message}**
Other advice suggests that you limit your message to 50 characters (like an email subject line), at least for the first line—this helps for going back and looking at previous commits. If you want to include more detail, do so after a blank line.
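One way to follow that convention directly from the command line is to pass `-m` more than once; git joins the values as separate paragraphs, so the first acts as the subject line and the second as the body. The message text here is purely illustrative:

```
git commit -m "Add favorite books list" -m "Include author names so entries are easier to look up later."
```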
A specific commit message format may also be required by your company or project team. See [this post](http://chris.beams.io/posts/git-commit/) for further consideration of good commit messages.
Finally, be sure to be professional in your commit messages. They will be read by your professors, bosses, coworkers, and other developers on the internet. Don’t join [this group](https://twitter.com/gitlost).
After you’ve committed your changes, be sure to check `git status`, which should now say that there is nothing to commit!
### 4\.3\.3 Commit History
You can also view the history of commits you’ve made:
```
git log [--oneline]
```
This will give you a list of the *sequence* of commits you’ve made: you can see who made what changes and when. (The term **HEAD** refers to the most recent commit). The optional `--oneline` option gives you a nice compact version. Note that each commit is listed with its [SHA\-1](https://en.wikipedia.org/wiki/SHA-1) hash (the random numbers and letters), which you can use to identify each commit.
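For example (the hash below is a made\-up placeholder; use one from your own `git log` output):

```
# compact, one-line-per-commit history
git log --oneline
# inspect a single commit by (a prefix of) its SHA-1 hash
git show a1b2c3d
```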
### 4\.3\.4 Reviewing the Process
This cycle of “edit files”, “add files”, “commit changes” is the standard “development loop” when working with git.
The local git process.
In general, you’ll make lots of changes to your code (editing lots of files, running and testing your code, etc). Then once you’re at a good “break point”—you’ve got a feature working, you’re stuck and need some coffee, you’re about to embark on some radical changes—you will add and commit your changes to make sure you don’t lose any work and you can always get back to that point.
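In command form, one pass through that loop might look like the following sketch (the commit message is just an example):

```
# ...edit and test your files, then...
git add .
git commit -m "Describe what this batch of changes does"
git status
```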
#### 4\.3\.4\.1 Practice
For further practice using git, perform the following steps:
1. **Edit** your list of books to include two more books (top 5 list!)
2. **Add** the changes to the staging area
3. **Commit** the changes to the repository
Be sure to check the status at each step to make sure everything works!
You can also add more files besides `books.md`, such as `movies.md` or
`restaurants.md`.
When your repository grows, you can check which files are tracked by *git*, both already
committed and in the staging area, by
```
git ls-files
```
(Note that `git status` does not mention tracked files that are not changed.)
### 4\.3\.5 The `.gitignore` File
Sometimes you want git to always ignore particular directories or files in your project. For example, if you use a Mac and you tend to organize your files in the Finder, the operating system will create a hidden file in that folder named `.DS_Store` (the leading dot makes it “hidden”) to track the positions of icons, which folders have been “expanded”, etc. This file will likely be different from machine to machine. If it is added to your repository and you work from multiple machines (or as part of a team), it could lead to a lot of merge conflicts (not to mention cluttering up the folders for Windows users).
You can tell git to ignore files like these by creating a special *hidden* file in your project directory called `.gitignore` (note the leading dot). This file contains a *list* of files or folders that git should “ignore” and pretend don’t exist. The file uses a very simple format: each line contains the path to a directory or file to ignore; multiple files are placed on multiple lines. For example:
```
# This is an example .gitignore file
# Mac system file; the leading # marks a comment
.DS_Store
# example: don't check in passwords or ssl keys!
secret/my_password.txt
# example: don't include large files or libraries
movies/my_four_hour_epic.mov
```
Note that the easiest way to create the `.gitignore` file is to use your preferred text editor (e.g., Atom); select `File > New` from the menu and choose to make the `.gitignore` file *directly inside* your repo. The `.gitignore` file should be added and committed to the repo so that the ignore rules are shared across the project. There are also ways to ignore files on a single computer only; see the [documentation on GitHub](https://help.github.com/en/articles/ignoring-files) for details.
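Committing the ignore rules is the usual add\-and\-commit step, for example:

```
# share the ignore rules with everyone who works on the repo
git add .gitignore
git commit -m "Add .gitignore"
```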
You may want to ignore certain files in *all your repositories*. For instance, if you are on a Mac, we **strongly suggest** *globally ignoring* your `.DS_Store` file. There’s no need to ever share or track this file. To always ignore this file on your machine, simply run these lines of code:
```
# Run these lines on your terminal to configure git to ignore .DS_Store
git config --global core.excludesfile ~/.gitignore
echo .DS_Store >> ~/.gitignore
```
See [this article](http://jesperrasmussen.com/2013/11/13/globally-ignore-ds-store-on-mac/) for more information.
4\.4 GitHub and Remotes
-----------------------
Now that you’ve gotten the hang of git, let’s talk about GitHub. [GitHub](https://github.com/) is an online service that stores copies of repositories in the cloud. These repositories can be *linked* to your **local** repositories (the one on your machine, like you’ve been working with so far) so that you can synchronize changes between them.
* The relationship between git and GitHub is the same as that between your camera and Imgur: **git** is the program we use to create and manage repositories; GitHub is simply a website that stores these repositories. So we use git, but upload to/download from GitHub.
Repositories stored on GitHub are examples of **remotes**: other repos that are linked to your local one. Each repo can have multiple remotes, and you can synchronize commits between them.
Each remote has a URL associated with it (where on the internet the remote copy of the repo can be found), but they are given “alias” names (like browser bookmarks). By convention, the remote repo stored on GitHub’s servers is named **`origin`**, since it tends to be the “origin” of any code you’ve started working on.
Remotes don’t need to be stored on GitHub’s computers, but GitHub is one of the most popular places to put repos. One repo can also have more than one remote, for instance `origin` for the GitHub version of your repo, and `upstream` for the repo you initially forked your code from.
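You can always check which remotes a repo has, and which URLs their aliases point to:

```
# list each remote's alias and its fetch/push URL
git remote -v
```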
### 4\.4\.1 Forking and Cloning
In order to use GitHub, you’ll need to [**create a free GitHub account**](https://github.com/join), which you should have done as part of setting up your machine.
Next, you’ll need to download a copy of a repo from GitHub onto your own machine. **Never make changes or commit directly to GitHub**: all development work is done locally, and changes you make are then uploaded and *merged* into the remote.
Start by visiting [this link](https://github.com/info201/github_practice). This is the web portal for an existing repository. You can see that it contains one file (`README.md`, a Markdown file with a description of the repo) and a folder containing a second file. You can click on the files and folder to view their source online, but again you won’t change them there!
Just like with Imgur or Flickr or other image\-hosting sites, each GitHub user has their own account under which repos are stored. The repo linked above is under the course book account (`info201`). And because it’s under our user account, you won’t be able to modify it—just like you can’t change someone else’s picture on Imgur. So the first thing you’ll need to do is copy the repo over to *your **own** account on GitHub’s servers*. This process is called **forking** the repo (you’re creating a “fork” in the development, splitting off to your own version).
* To fork a repo, click the **“Fork”** button in the upper\-right of the screen:
The fork button on GitHub’s web portal.
This will copy the repo over to your own account, so that you can upload and download changes to it!
Students in the INFO 201 course will be forking repos for class and lab exercises, but *not* for homework assignments (see below).
Now that you have a copy of the repo under your own account, you need to download it to your machine. We do this by using the `clone` command:
```
git clone [url]
```
This command will create a new repo (directory) *in the current folder*, and download a copy of the code and all the commits from the URL you specify.
* You can get the URL from the address bar of your browser, or you can click the green “Clone or Download” button to get a popup with the URL. The little icon will copy the URL to your clipboard. **Do not** click “Open in Desktop” or “Download Zip”.
* Make sure you clone from the *forked* version (the one under your account!)
**Warning**: be sure to `cd` out of the `git_practice` directory first; you don’t want to `clone` into a folder that is already a repo, since you’re effectively creating a *new* repository on your machine here!
Note that you’ll only need to `clone` once per machine; `clone` is like `init` for repos that are on GitHub—in fact, the `clone` command *includes* the `init` command (so you do not need to init a cloned repo).
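For example, cloning your fork of the practice repo might look like the following (the username in the URL is a placeholder; substitute your own GitHub username):

```
# clone your fork into a new github_practice folder, then move into it
git clone https://github.com/your-username/github_practice.git
cd github_practice
```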
### 4\.4\.2 Pushing and Pulling
Now that you have a copy of the repo code, make some changes to it! Edit the `README.md` file to include your name, then `add` the change to the staging area and `commit` the changes to the repo (don’t forget the `-m` message!).
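For instance (the message is illustrative):

```
git add README.md
git commit -m "Add my name to the README"
```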
Although you’ve made the changes locally, you have not uploaded them to GitHub yet—if you refresh the web portal page (make sure you’re looking at the one under your account), you shouldn’t see your changes yet.
In order to get the changes to GitHub, you’ll need to `push` (upload) them to GitHub’s computers. You can do this with the following command:
```
git push origin master
```
This will push the current code to the `origin` remote (specifically to its `master` branch of development).
* When you cloned the repo, it came with an `origin` “bookmark” to the original repo’s location on GitHub!
Once you’ve **pushed** your code, you should be able to refresh the GitHub webpage and see your changes to the README!
If you want to download the changes (commits) that someone else made, you can do that using the `pull` command, which will download the changes from GitHub and *merge* them into the code on your local machine:
```
git pull
```
Because you’re merging as part of a `pull`, you’ll need to keep an eye out for **merge conflicts**! These will be discussed in more detail in [chapter 14](git-branches.html#git-branches).
**Pro Tip**: always `pull` before you `push`. Technically using `git push` causes a merge to occur on GitHub’s servers, but GitHub won’t let you push if that merge might potentially cause a conflict. If you `pull` first, you can make sure your local version is up to date so that no conflicts will occur when you upload.
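In practice, that habit looks like:

```
# download and merge any new commits first, then upload your own
git pull
git push origin master
```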
### 4\.4\.3 Reviewing The Process
Overall, the process of using git and GitHub together looks as follows:
The remote git process.
4\.5 Course Assignments on GitHub
---------------------------------
For students in INFO 201: While class and lab work will use the “fork and clone” workflow described above, homework assignments will work slightly differently. Assignments in this course are configured using GitHub Classroom, which provides each student a *private* repo (under the class account) for the assignment.
Each assignment description in Canvas contains a link to create an assignment repo: click the link and then **accept the assignment** in order to create your own code repo. Once the repository is created, you should **`clone`** it to your local machine to work. **Do not fork your assignment repo**.
**DO NOT FORK YOUR ASSIGNMENT REPO.**
After `cloning` the assignment repo, you can begin working following the workflow described above:
1. Make changes to your files
2. **Add** files with changes to the staging area (`git add .`)
3. **Commit** these changes to the repo (`git commit -m "commit message"`)
4. **Push** changes back to GitHub (`git push origin master`) to turn in your work.
Repeat these steps each time you reach a “checkpoint” in your work to save it both locally and in the cloud (in case of computer problems).
4\.6 Command Summary
--------------------
Whew! You made it through! This chapter has a lot to take in, but really you just need to understand and use the following half\-dozen commands:
* `git status` Check the status of a repo
* `git add` Add file to the staging area
* `git commit -m "message"` Commit changes
* `git clone` Copy repo to local machine
* `git push origin master` Upload commits to GitHub
* `git pull` Download commits from GitHub
Using git and GitHub can be challenging, and you’ll inevitably run into issues. While it’s tempting to ignore version control systems, **they will save you time** in the long\-run. For now, do your best to follow these processes, and read any error messages carefully. If you run into trouble, try to understand the issue (Google/StackOverflow), and don’t hesitate to ask for help.
Resources
---------
* [Git and GitHub in Plain English](https://red-badger.com/blog/2016/11/29/gitgithub-in-plain-english)
* [Atlassian Git Tutorial](https://www.atlassian.com/git/tutorials/what-is-version-control)
* [Try Git](https://try.github.io/levels/1/challenges/1) (interactive tutorial)
* [GitHub Setup and Instructions](https://help.github.com/articles/set-up-git/)
* [Official Git Documentation](https://git-scm.com/doc)
* [Git Cheat Sheet](https://education.github.com/git-cheat-sheet-education.pdf)
* [Ignore DS\_Store on a Mac](http://jesperrasmussen.com/2013/11/13/globally-ignore-ds-store-on-mac/)
4\.1 What is this *git* thing anyway?
-------------------------------------
[
Git is an example of a **version control system**. [Eric Raymond](https://en.wikipedia.org/wiki/Eric_S._Raymond) defines version control as
> A version control system (VCS) is a tool for managing a collection of program code that provides you with three important capabilities: **reversibility**, **concurrency**, and **annotation**.
Version control systems work a lot like Dropbox or Google Docs: they allow multiple people to work on the same files at the same time, to view and “roll back” to previous versions. However, systems like git different from Dropbox in a couple of key ways:
1. New versions of your files must be explicitly “committed” when they are ready. Git doesn’t save a new version every time you save a file to disk. That approach works fine for word\-processing documents, but not for programming files. You typically need to write some code, save it, test it, debug, make some fixes, and test again before you’re ready to save a new version.
2. For text files (which almost all programming files are), git tracks changes *line\-by\-line*. This means it can easily and automatically combine changes from multiple people, and gives you very precise information what what lines of code changes.
Like Dropbox and Google Docs, git can show you all previous versions of a file and can quickly rollback to one of those previous versions. This is often helpful in programming, especially if you embark on making a massive set of changes, only to discover part way through that those changes were a bad idea (we speak from experience here 😱 ).
But where git really comes in handy is in team development. Almost all professional development work is done in teams, which involves multiple people working on the same set of files at the same time. Git helps the team coordinate all these changes, and provides a record so that anyone can see how a given file ended up the way it did.
There are a number of different version control systems in the world, but [git](http://git-scm.com/) is the de facto standard—particularly when used in combination with the cloud\-based service [GitHub](https://github.com/).
### 4\.1\.1 Git Core Concepts
To understand how git works, you need to understand its core concepts. Read this section carefully, and come back to it if you forget what these terms mean.
* **repository (repo):**
A database containing all the committed versions of all your files, along with some additional metadata, stored in a hidden subdirectory named `.git` within your project directory. If you want to sound cool and in\-the\-know, call a project folder a “repo.”
* **commit:**
A set of file versions that have been added to the repository (saved in the database), along with the name of the person who did the commit, a message describing the commit, and a timestamp. This extra tracking information allows you to see when, why, and by whom changes were made to a given file. Committing a set of changes creates a “snapshot” of what that work looks like at the time—it’s like saving the files, but more so.
* **remote:**
A link to a copy of this same repository on a different machine. Typically this will be a central version of the repository that all local copies on your various development machines point to. You can push (upload) commits to, and pull (download) commits from, a remote repository to keep everything in sync.
* **merging:**
Git supports having multiple different versions of your work that all live side by side (in what are called **branches**), whether those versions are created by one person or many collaborators. Git allows the commits saved in different versions of the code to be easily *merged* (combined) back together without you needing to manually copy and paste different pieces of the code. This makes it easy to separate and then recombine work from different developers.
### 4\.1\.2 Wait, but what is GitHub then?
Git was made to support completely decentralized development, where developers pull commits (sets of changes) from each other’s machines directly. But most professional teams take the approach of creating one central repository on a server that all developers push to and pull from. This repository contains the authoritative version the source code, and all deployments to the “rest of the world” are done by downloading from this centralized repository.
Teams can setup their own servers to host these centralized repositories, but many choose to use a server maintained by someone else. The most popular of these in the open\-source world is [GitHub](https://github.com/). In addition to hosting centralized repositories, GitHub also offers other team development features, such as issue tracking, wiki pages, and notifications. Public repositories on GitHub are free, but you have to pay for private ones.
In short: GitHub is a site that provides as a central authority (or clearing\-house) for multiple people collaborating with git. Git is what you use to do version control; GitHub is one possible place where repositories of code can be stored.
### 4\.1\.1 Git Core Concepts
To understand how git works, you need to understand its core concepts. Read this section carefully, and come back to it if you forget what these terms mean.
* **repository (repo):**
A database containing all the committed versions of all your files, along with some additional metadata, stored in a hidden subdirectory named `.git` within your project directory. If you want to sound cool and in\-the\-know, call a project folder a “repo.”
* **commit:**
A set of file versions that have been added to the repository (saved in the database), along with the name of the person who did the commit, a message describing the commit, and a timestamp. This extra tracking information allows you to see when, why, and by whom changes were made to a given file. Committing a set of changes creates a “snapshot” of what that work looks like at the time—it’s like saving the files, but more so.
* **remote:**
A link to a copy of this same repository on a different machine. Typically this will be a central version of the repository that all local copies on your various development machines point to. You can push (upload) commits to, and pull (download) commits from, a remote repository to keep everything in sync.
* **merging:**
Git supports having multiple different versions of your work that all live side by side (in what are called **branches**), whether those versions are created by one person or many collaborators. Git allows the commits saved in different versions of the code to be easily *merged* (combined) back together without you needing to manually copy and paste different pieces of the code. This makes it easy to separate and then recombine work from different developers.
### 4\.1\.2 Wait, but what is GitHub then?
Git was made to support completely decentralized development, where developers pull commits (sets of changes) from each other’s machines directly. But most professional teams take the approach of creating one central repository on a server that all developers push to and pull from. This repository contains the authoritative version the source code, and all deployments to the “rest of the world” are done by downloading from this centralized repository.
Teams can setup their own servers to host these centralized repositories, but many choose to use a server maintained by someone else. The most popular of these in the open\-source world is [GitHub](https://github.com/). In addition to hosting centralized repositories, GitHub also offers other team development features, such as issue tracking, wiki pages, and notifications. Public repositories on GitHub are free, but you have to pay for private ones.
In short: GitHub is a site that provides as a central authority (or clearing\-house) for multiple people collaborating with git. Git is what you use to do version control; GitHub is one possible place where repositories of code can be stored.
4\.2 Installation \& Setup
--------------------------
This chapter will walk you through all the commands you’ll need to do version control with git. It is written as a “tutorial” to help you practice what you’re reading!
If you haven’t yet, the first thing you’ll need to do is [install git](http://git-scm.com/downloads). You should already have done this as part of [setting up your machine](setup-machine.html#setup-machine).
The first time you use `git` on your machine, you’ll need [to configure](https://help.github.com/articles/set-up-git/) the installation, telling git who you are so you can commit changes to a repository. You can do this by using the `git` command with the `config` option (i.e., running the `git config` command):
```
# enter your full name (without the dashes)
git config --global user.name "your-full-name"
# enter your email address (the one associated with your GitHub account)
git config --global user.email "your-email-address"
```
Setting up an [SSH key](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) for GitHub on your own machine is also a huge time saver. If you don’t set up the key, you’ll need to enter your GitHub password each time you want to push changes up to GitHub (which may be multiple times a day). Simply follow the instructions on [this page](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to set up a key, and make sure to only do this on *your machine*.
### 4\.2\.1 Creating a Repo
The first thing you’ll need in order to work with git is to create a **repository**. A repository acts as a “database” of changes that you make to files in a directory.
In order to have a repository, you’ll need to have a directory of files. Create a new folder `git_practice` on your computer’s Desktop. Since you’ll be using the command\-line for this course, you might as well practice creating a new directory programmatically:
Making a folder with the command\-line.
You can turn this directory *into* a repository by telling the `git` program to run the `init` action:
```
# run IN the directory of project
# you can easily check this with the "pwd" command
git init
```
This creates a new *hidden* folder called `.git` inside of the current directory (it’s hidden so you won’t see it in Finder, but if you use `ls -a` (list with the **a**ll option) you can see it there). This folder is the “database” of changes that you will make—git will store all changes you commit in this folder. The presence of the `.git` folder causes that directory to become a repository; we refer to the whole directory as the “repo” (an example of [synechoche](https://en.wikipedia.org/wiki/Synecdoche)).
* Note that because a repo is a single folder, you can have lots of different repos on your machine. Just make sure that they are in separate folders; folders that are *inside* a repo are considered part of that repo, and trying to treat them as a separate repository causes unpleasantness. **Do not put one repo inside of another!**
Multiple folders, multiple repositories.
### 4\.2\.2 Checking Status
Now that you have a repo, the next thing you should do is check its **status**:
```
git status
```
The `git status` command will give you information about the current “state” of the repo. For example, running this command tells us a few things:
* That you’re actually in a repo (otherwise you’ll get an error)
* That you’re on the `master` branch (think: line of development)
* That you’re at the initial commit (you haven’t committed anything yet)
* That currently there are no changes to files that you need to commit (save) to the database
* *What to do next!*
That last point is important. Git status messages are verbose and somewhat awkward to read (this is the command\-line after all), but if you look at them carefully they will almost always tell you what command to use next.
**If you are ever stuck, use `git status` to figure out what to do next!**
If the output of `git status` seems to verbose to you, you may use `git status -s`, the “short” version of it instead. It does gives you only the very basic information about what’s going on, but this is exactly what you need if you are an experienced git user.
This makes `git status` the most useful command in the entire process. Learn it, use it, love it.
### 4\.2\.1 Creating a Repo
The first thing you’ll need in order to work with git is to create a **repository**. A repository acts as a “database” of changes that you make to files in a directory.
In order to have a repository, you’ll need to have a directory of files. Create a new folder `git_practice` on your computer’s Desktop. Since you’ll be using the command\-line for this course, you might as well practice creating a new directory programmatically:
Making a folder with the command\-line.
You can turn this directory *into* a repository by telling the `git` program to run the `init` action:
```
# run IN the directory of project
# you can easily check this with the "pwd" command
git init
```
This creates a new *hidden* folder called `.git` inside of the current directory (it’s hidden so you won’t see it in Finder, but if you use `ls -a` (list with the **a**ll option) you can see it there). This folder is the “database” of changes that you will make—git will store all changes you commit in this folder. The presence of the `.git` folder causes that directory to become a repository; we refer to the whole directory as the “repo” (an example of [synechoche](https://en.wikipedia.org/wiki/Synecdoche)).
* Note that because a repo is a single folder, you can have lots of different repos on your machine. Just make sure that they are in separate folders; folders that are *inside* a repo are considered part of that repo, and trying to treat them as a separate repository causes unpleasantness. **Do not put one repo inside of another!**
Multiple folders, multiple repositories.
### 4\.2\.2 Checking Status
Now that you have a repo, the next thing you should do is check its **status**:
```
git status
```
The `git status` command will give you information about the current “state” of the repo. For example, running this command tells us a few things:
* That you’re actually in a repo (otherwise you’ll get an error)
* That you’re on the `master` branch (think: line of development)
* That you’re at the initial commit (you haven’t committed anything yet)
* That currently there are no changes to files that you need to commit (save) to the database
* *What to do next!*
That last point is important. Git status messages are verbose and somewhat awkward to read (this is the command\-line after all), but if you look at them carefully they will almost always tell you what command to use next.
**If you are ever stuck, use `git status` to figure out what to do next!**
If the output of `git status` seems to verbose to you, you may use `git status -s`, the “short” version of it instead. It does gives you only the very basic information about what’s going on, but this is exactly what you need if you are an experienced git user.
This makes `git status` the most useful command in the entire process. Learn it, use it, love it.
4\.3 Making Changes
-------------------
Since `git status` told you to create a file, go ahead and do that. Using your [favorite editor](https://atom.io/), create a new file `books.md` inside the repo directory. This [Markdown](markdown.html#markdown) file should contain a *list* of 3 of your favorite books. Make sure you save the changes to your file to disk (to your computer’s harddrive)!
### 4\.3\.1 Adding Files
Run `git status` again. You should see that git now gives a list of changed and “untracked” files, as well as instructions about what to do next in order to save those changes to the repo’s database.
The first thing you need to do is to save those changes to the **staging area**. This is like a shopping cart in an online store: you put changes in temporary storage before you commit to recording them in the database (e.g., before hitting “purchase”).
We add files to the staging area using the `git add` command:
```
git add filename
```
(Replacing `filename` with the name/path of the file/folder you want to add).
If you are sure you want to add *all the contents* of the directory (tracked or untracked) to the staging area, you can do it with:
```
git add .
```
**WARNING**: This will add *everything* in
your current directory to your git repo, unless ignored through the
`.gitignore` file ([see below](git-basics.html#gitignore)). This may include your top secret passwords,
gigabytes of data, all your illegally ripped dvds, and just files that are
unnecessary to upload.
Add the `books.md` file to the staging area. And of course, now that
you’ve changed the repo (you put something in the staging area), you
should run `git status` to see what it says to do. Notice that it tells
you what files are in the staging area, as well as the command to
*unstage* those files (remove them from the “cart”).
### 4\.3\.2 Committing
When you’re happy with the contents of your staging area (e.g., you’re ready to purchase), it’s time to **commit** those changes, saving that snapshot of the files in the repository database. We do this with the `git commit` command:
```
git commit -m "your message here"
```
The `"your message here"` should be replaced with a short message saying what changes that commit makes to the repo (see below for details).
**WARNING**: If you forget the `-m` option, git will put you into a command\-line *text editor* so that you can compose a message (then save and exit to finish the commit). If you haven’t done any other configuration, you might be dropped into the ***vim*** editor. Type **`:q`** (**colon** then **q**) and hit enter to flee from this horrid place and try again, remembering the `-m` option! Don’t panic: getting stuck in *vim* [happens to everyone](https://stackoverflow.blog/2017/05/23/stack-overflow-helping-one-million-developers-exit-vim/).
If you do not add any new files to git control, you can also use the shorthand form
```
git commit -am "message"
```
instead of `git add` and `git commit` separately. the `-a` option tells git to include *all modified files* into this commit.
#### 4\.3\.2\.1 Commit Message Etiquette
Your commit messages should [be informative](https://xkcd.com/1296/) about what changes the commit is making to the repo. `"stuff"` is not a good commit message. `"Fix critical authorization error"` is a good commit message.
Commit messages should use the **imperative mood** (`"Add feature"` not `"added feature"`). They should complete the sentence:
> If applied, this commit will **{your message}**
Other advice suggests that you limit your message to 50 characters (like an email subject line), at least for the first line—this helps for going back and looking at previous commits. If you want to include more detail, do so after a blank line.
A specific commit message format may also be required by your company or project team. See [this post](http://chris.beams.io/posts/git-commit/) for further consideration of good commit messages.
Finally, be sure to be professional in your commit messages. They will be read by your professors, bosses, coworkers, and other developers on the internet. Don’t join [this group](https://twitter.com/gitlost).
After you’ve committed your changes, be sure and check `git status`, which should now say that there is nothing to commit!
### 4\.3\.3 Commit History
You can also view the history of commits you’ve made:
```
git log [--oneline]
```
This will give you a list of the *sequence* of commits you’ve made: you can see who made what changes and when. (The term **HEAD** refers to the most recent commit). The optional `--oneline` option gives you a nice compact version. Note that each commit is listed with its [SHA\-1](https://en.wikipedia.org/wiki/SHA-1) hash (the random numbers and letters), which you can use to identify each commit.
### 4\.3\.4 Reviewing the Process
This cycle of “edit files”, “add files”, “commit changes” is the standard “development loop” when working with git.
The local git process.
In general, you’ll make lots of changes to your code (editing lots of files, running and testing your code, etc). Then once you’re at a good “break point”—you’ve got a feature working, you’re stuck and need some coffee, you’re about to embark on some radical changes—you will add and commit your changes to make sure you don’t lose any work and you can always get back to that point.
#### 4\.3\.4\.1 Practice
For further practice using git, perform the following steps:
1. **Edit** your list of books to include two more books (top 5 list!)
2. **Add** the changes to the staging area
3. **Commit** the changes to the repository
Be sure and check the status at each step to make sure everything works!
You can also add more files besides `books.md`, such as `movies.md` or
`restaurants.md`.
When your repository grows, you can check which files are tracked by *git*, both already
committed and in the staging area, by
```
git ls-files
```
(Note that `git status` does not mention tracked files that are not changed.)
### 4\.3\.5 The `.gitignore` File
Sometimes you want git to always ignore particular directories or files in your project. For example, if you use a Mac and you tend to organize your files in the Finder, the operating system will create a hidden file in that folder named `.DS_Store` (the leading dot makes it “hidden”) to track the positions of icons, which folders have been “expanded”, etc. This file will likely be different from machine to machine. If it is added to your repository and you work from multiple machines (or as part of a team), it could lead to a lot of merge conflicts (not to mention cluttering up the folders for Windows users).
You can tell git to ignore files like these by creating a special *hidden* file in your project directory called `.gitignore` (note the leading dot). This file contains a *list* of files or folders that git should “ignore” and pretend don’t exist. The file uses a very simple format: each line contains the path to a directory or file to ignore; multiple files are placed on multiple lines. For example:
```
# This is an example .gitignore file
# Mac system file; the leading # marks a comment
.DS_Store
# example: don't check in passwords or ssl keys!
secret/my_password.txt
# example: don't include large files or libraries
movies/my_four_hour_epic.mov
```
Note that the easiest way to create the `.gitignore` file is to use your preferred text editor (e.g., Atom); select `File > New` from the menu and choose to make the `.gitignore` file *directly inside* your repo. `.gitgnore` should be added and committed to the repo to make the ignore\-rules common across the project. There are also ways to ignore files on a single computer only, see [documentation on github](https://help.github.com/en/articles/ignoring-files) for details.
You may want to ignore certain files in *all your repositories*. For instance, if you are on a Mac, we **strongly suggest** *globally ignoring* your `.DS_Store` file. There’s no need to ever share or track this file. To always ignore this file on your machine, simply run these lines of code:
```
# Run these lines on your terminal to configure git to ignore .DS_Store
git config --global core.excludesfile ~/.gitignore
echo .DS_Store >> ~/.gitignore
```
See [this article](http://jesperrasmussen.com/2013/11/13/globally-ignore-ds-store-on-mac/) for more information.
### 4\.3\.1 Adding Files
Run `git status` again. You should see that git now gives a list of changed and “untracked” files, as well as instructions about what to do next in order to save those changes to the repo’s database.
The first thing you need to do is to save those changes to the **staging area**. This is like a shopping cart in an online store: you put changes in temporary storage before you commit to recording them in the database (e.g., before hitting “purchase”).
We add files to the staging area using the `git add` command:
```
git add filename
```
(Replacing `filename` with the name/path of the file/folder you want to add).
If you are sure you want to add *all the contents* of the directory (tracked or untracked) to the staging area, you can do it with:
```
git add .
```
**WARNING**: This will add *everything* in
your current directory to your git repo, unless ignored through the
`.gitignore` file ([see below](git-basics.html#gitignore)). This may include your top secret passwords,
gigabytes of data, all your illegally ripped dvds, and just files that are
unnecessary to upload.
Add the `books.md` file to the staging area. And of course, now that
you’ve changed the repo (you put something in the staging area), you
should run `git status` to see what it says to do. Notice that it tells
you what files are in the staging area, as well as the command to
*unstage* those files (remove them from the “cart”).
### 4\.3\.2 Committing
When you’re happy with the contents of your staging area (e.g., you’re ready to purchase), it’s time to **commit** those changes, saving that snapshot of the files in the repository database. We do this with the `git commit` command:
```
git commit -m "your message here"
```
The `"your message here"` should be replaced with a short message saying what changes that commit makes to the repo (see below for details).
**WARNING**: If you forget the `-m` option, git will put you into a command\-line *text editor* so that you can compose a message (then save and exit to finish the commit). If you haven’t done any other configuration, you might be dropped into the ***vim*** editor. Type **`:q`** (**colon** then **q**) and hit enter to flee from this horrid place and try again, remembering the `-m` option! Don’t panic: getting stuck in *vim* [happens to everyone](https://stackoverflow.blog/2017/05/23/stack-overflow-helping-one-million-developers-exit-vim/).
If you do not add any new files to git control, you can also use the shorthand form
```
git commit -am "message"
```
instead of `git add` and `git commit` separately. the `-a` option tells git to include *all modified files* into this commit.
#### 4\.3\.2\.1 Commit Message Etiquette
Your commit messages should [be informative](https://xkcd.com/1296/) about what changes the commit is making to the repo. `"stuff"` is not a good commit message. `"Fix critical authorization error"` is a good commit message.
Commit messages should use the **imperative mood** (`"Add feature"` not `"added feature"`). They should complete the sentence:
> If applied, this commit will **{your message}**
Other advice suggests that you limit your message to 50 characters (like an email subject line), at least for the first line—this helps for going back and looking at previous commits. If you want to include more detail, do so after a blank line.
A specific commit message format may also be required by your company or project team. See [this post](http://chris.beams.io/posts/git-commit/) for further consideration of good commit messages.
Finally, be sure to be professional in your commit messages. They will be read by your professors, bosses, coworkers, and other developers on the internet. Don’t join [this group](https://twitter.com/gitlost).
After you’ve committed your changes, be sure and check `git status`, which should now say that there is nothing to commit!
#### 4\.3\.2\.1 Commit Message Etiquette
Your commit messages should [be informative](https://xkcd.com/1296/) about what changes the commit is making to the repo. `"stuff"` is not a good commit message. `"Fix critical authorization error"` is a good commit message.
Commit messages should use the **imperative mood** (`"Add feature"` not `"added feature"`). They should complete the sentence:
> If applied, this commit will **{your message}**
Other advice suggests that you limit your message to 50 characters (like an email subject line), at least for the first line—this helps for going back and looking at previous commits. If you want to include more detail, do so after a blank line.
A specific commit message format may also be required by your company or project team. See [this post](http://chris.beams.io/posts/git-commit/) for further consideration of good commit messages.
Finally, be sure to be professional in your commit messages. They will be read by your professors, bosses, coworkers, and other developers on the internet. Don’t join [this group](https://twitter.com/gitlost).
After you’ve committed your changes, be sure and check `git status`, which should now say that there is nothing to commit!
### 4\.3\.3 Commit History
You can also view the history of commits you’ve made:
```
git log [--oneline]
```
This will give you a list of the *sequence* of commits you’ve made: you can see who made what changes and when. (The term **HEAD** refers to the most recent commit). The optional `--oneline` option gives you a nice compact version. Note that each commit is listed with its [SHA\-1](https://en.wikipedia.org/wiki/SHA-1) hash (the random numbers and letters), which you can use to identify each commit.
### 4\.3\.4 Reviewing the Process
This cycle of “edit files”, “add files”, “commit changes” is the standard “development loop” when working with git.
The local git process.
In general, you’ll make lots of changes to your code (editing lots of files, running and testing your code, etc). Then once you’re at a good “break point”—you’ve got a feature working, you’re stuck and need some coffee, you’re about to embark on some radical changes—you will add and commit your changes to make sure you don’t lose any work and you can always get back to that point.
#### 4\.3\.4\.1 Practice
For further practice using git, perform the following steps:
1. **Edit** your list of books to include two more books (top 5 list!)
2. **Add** the changes to the staging area
3. **Commit** the changes to the repository
Be sure and check the status at each step to make sure everything works!
You can also add more files besides `books.md`, such as `movies.md` or
`restaurants.md`.
When your repository grows, you can check which files are tracked by *git*, both already
committed and in the staging area, by
```
git ls-files
```
(Note that `git status` does not mention tracked files that are not changed.)
#### 4\.3\.4\.1 Practice
For further practice using git, perform the following steps:
1. **Edit** your list of books to include two more books (top 5 list!)
2. **Add** the changes to the staging area
3. **Commit** the changes to the repository
Be sure and check the status at each step to make sure everything works!
You can also add more files besides `books.md`, such as `movies.md` or
`restaurants.md`.
When your repository grows, you can check which files are tracked by *git*, both already
committed and in the staging area, by
```
git ls-files
```
(Note that `git status` does not mention tracked files that are not changed.)
### 4\.3\.5 The `.gitignore` File
Sometimes you want git to always ignore particular directories or files in your project. For example, if you use a Mac and you tend to organize your files in the Finder, the operating system will create a hidden file in that folder named `.DS_Store` (the leading dot makes it “hidden”) to track the positions of icons, which folders have been “expanded”, etc. This file will likely be different from machine to machine. If it is added to your repository and you work from multiple machines (or as part of a team), it could lead to a lot of merge conflicts (not to mention cluttering up the folders for Windows users).
You can tell git to ignore files like these by creating a special *hidden* file in your project directory called `.gitignore` (note the leading dot). This file contains a *list* of files or folders that git should “ignore” and pretend don’t exist. The file uses a very simple format: each line contains the path to a directory or file to ignore; multiple files are placed on multiple lines. For example:
```
# This is an example .gitignore file
# Mac system file; the leading # marks a comment
.DS_Store
# example: don't check in passwords or ssl keys!
secret/my_password.txt
# example: don't include large files or libraries
movies/my_four_hour_epic.mov
```
Note that the easiest way to create the `.gitignore` file is to use your preferred text editor (e.g., Atom); select `File > New` from the menu and choose to make the `.gitignore` file *directly inside* your repo. `.gitgnore` should be added and committed to the repo to make the ignore\-rules common across the project. There are also ways to ignore files on a single computer only, see [documentation on github](https://help.github.com/en/articles/ignoring-files) for details.
You may want to ignore certain files in *all your repositories*. For instance, if you are on a Mac, we **strongly suggest** *globally ignoring* your `.DS_Store` file. There’s no need to ever share or track this file. To always ignore this file on your machine, simply run these lines of code:
```
# Run these lines on your terminal to configure git to ignore .DS_Store
git config --global core.excludesfile ~/.gitignore
echo .DS_Store >> ~/.gitignore
```
See [this article](http://jesperrasmussen.com/2013/11/13/globally-ignore-ds-store-on-mac/) for more information.
4\.4 GitHub and Remotes
-----------------------
Now that you’ve gotten the hang of git, let’s talk about GitHub. [GitHub](https://github.com/) is an online service that stores copies of repositories in the cloud. These repositories can be *linked* to your **local** repositories (the one on your machine, like you’ve been working with so far) so that you can synchronize changes between them.
* The relationship between git and GitHub is the same as that between your camera and Imgur: **git** is the program we use to create and manage repositories; GitHub is simply a website that stores these repositories. So we use git, but upload to/download from GitHub.
Repositories stored on GitHub are examples of **remotes**: other repos that are linked to your local one. Each repo can have multiple remotes, and you can synchronize commits between them.
Each remote has a URL associated with it (where on the internet the remote copy of the repo can be found), but they are given “alias” names (like browser bookmarks). By convention, the remote repo stored on GitHub’s servers is named **`origin`**, since it tends to be the “origin” of any code you’ve started working on.
Remotes don’t need to be stored on GitHub’s computers, but it’s one of the most popular places to put repos. One repo can also have more than one remote, for instance `origin` for the GitHub\-version of your repo, and `upstream` for the repo you initially forked your code from.
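You can list the remotes linked to your current repo, along with the URL each alias points to, at any time:
```
# Show each remote's alias and URL
git remote -v
```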
### 4\.4\.1 Forking and Cloning
In order to use GitHub, you’ll need to [**create a free GitHub account**](https://github.com/join), which you should have done as part of setting up your machine.
Next, you’ll need to download a copy of a repo from GitHub onto your own machine. **Never make changes or commit directly to GitHub**: all development work is done locally, and changes you make are then uploaded and *merged* into the remote.
Start by visiting [this link](https://github.com/info201/github_practice). This is the web portal for an existing repository. You can see that it contains one file (`README.md`, a Markdown file with a description of the repo) and a folder containing a second file. You can click on the files and folder to view their source online, but again you won’t change them there!
Just like with Imgur or Flickr or other image\-hosting sites, each GitHub user has their own account under which repos are stored. The repo linked above is under the course book account (`info201`). And because it’s under our user account, you won’t be able to modify it—just like you can’t change someone else’s picture on Imgur. So the first thing you’ll need to do is copy the repo over to *your **own** account on GitHub’s servers*. This process is called **forking** the repo (you’re creating a “fork” in the development, splitting off to your own version).
* To fork a repo, click the **“Fork”** button in the upper\-right of the screen:
The fork button on GitHub’s web portal.
This will copy the repo over to your own account, so that you can upload and download changes to it!
Students in the INFO 201 course will be forking repos for class and lab exercises, but *not* for homework assignments (see below).
Now that you have a copy of the repo under your own account, you need to download it to your machine. We do this by using the `clone` command:
```
git clone [url]
```
This command will create a new repo (directory) *in the current folder*, and download a copy of the code and all the commits from the URL you specify.
* You can get the URL from the address bar of your browser, or you can click the green “Clone or Download” button to get a popup with the URL. The little icon will copy the URL to your clipboard. **Do not** click “Open in Desktop” or “Download Zip”.
* Make sure you clone from the *forked* version (the one under your account!)
**Warning** also be sure to `cd` out of the `git_practice` directory; you don’t want to `clone` into a folder that is already a repo; you’re effectively creating a *new* repository on your machine here!
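For example, cloning your fork might look something like this (`YOUR_USERNAME` is a placeholder for your own GitHub username; the actual URL comes from the “Clone or Download” button):
```
# Move out of any existing repo first, then clone your fork
cd ~
git clone https://github.com/YOUR_USERNAME/github_practice.git
cd github_practice
```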
Note that you’ll only need to `clone` once per machine; `clone` is like `init` for repos that are on GitHub—in fact, the `clone` command *includes* the `init` command (so you do not need to init a cloned repo).
### 4\.4\.2 Pushing and Pulling
Now that you have a copy of the repo code, make some changes to it! Edit the `README.md` file to include your name, then `add` the change to the staging area and `commit` the changes to the repo (don’t forget the `-m` message!).
Although you’ve made the changes locally, you have not uploaded them to GitHub yet—if you refresh the web portal page (make sure you’re looking at the one under your account), you shouldn’t see your changes yet.
In order to get the changes to GitHub, you’ll need to `push` (upload) them to GitHub’s computers. You can do this with the following command:
```
git push origin master
```
This will push the current code to the `origin` remote (specifically to its `master` branch of development).
* When you cloned the repo, it came with an `origin` “bookmark” to the original repo’s location on GitHub!
Once you’ve **pushed** your code, you should be able to refresh the GitHub webpage and see your changes to the README!
If you want to download the changes (commits) that someone else made, you can do that using the `pull` command, which will download the changes from GitHub and *merge* them into the code on your local machine:
```
git pull
```
Because you’re merging as part of a `pull`, you’ll need to keep an eye out for **merge conflicts**! These will be discussed in more detail in [chapter 14](git-branches.html#git-branches).
**Pro Tip**: always `pull` before you `push`. Technically using `git push` causes a merge to occur on GitHub’s servers, but GitHub won’t let you push if that merge might potentially cause a conflict. If you `pull` first, you can make sure your local version is up to date so that no conflicts will occur when you upload.
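A typical update\-then\-upload sequence therefore looks like:
```
git pull                 # download and merge any new commits from GitHub first
git push origin master   # then upload your own commits
```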
### 4\.4\.3 Reviewing The Process
Overall, the process of using git and GitHub together looks as follows:
The remote git process.
4\.5 Course Assignments on GitHub
---------------------------------
For students in INFO 201: While class and lab work will use the “fork and clone” workflow described above, homework assignments will work slightly differently. Assignments in this course are configured using GitHub Classroom, which provides each student a *private* repo (under the class account) for the assignment.
Each assignment description in Canvas contains a link to create an assignment repo: click the link and then **accept the assignment** in order to create your own code repo. Once the repository is created, you should **`clone`** it to your local machine to work. **Do not fork your assignment repo**.
**DO NOT FORK YOUR ASSIGNMENT REPO.**
After `cloning` the assignment repo, you can begin working following the workflow described above:
1. Make changes to your files
2. **Add** files with changes to the staging area (`git add .`)
3. **Commit** these changes to the repo (`git commit -m "commit message"`)
4. **Push** changes back to GitHub (`git push origin master`) to turn in your work.
Repeat these steps each time you reach a “checkpoint” in your work to save it both locally and in the cloud (in case of computer problems).
4\.6 Command Summary
--------------------
Whew! You made it through! This chapter has a lot to take in, but really you just need to understand and use the following half\-dozen commands:
* `git status` Check the status of a repo
* `git add` Add file to the staging area
* `git commit -m "message"` Commit changes
* `git clone` Copy repo to local machine
* `git push origin master` Upload commits to GitHub
* `git pull` Download commits from GitHub
Using git and GitHub can be challenging, and you’ll inevitably run into issues. While it’s tempting to ignore version control systems, **they will save you time** in the long\-run. For now, do your best to follow these processes, and read any error messages carefully. If you run into trouble, try to understand the issue (Google/StackOverflow), and don’t hesitate to ask for help.
Resources
---------
* [Git and GitHub in Plain English](https://red-badger.com/blog/2016/11/29/gitgithub-in-plain-english)
* [Atlassian Git Tutorial](https://www.atlassian.com/git/tutorials/what-is-version-control)
* [Try Git](https://try.github.io/levels/1/challenges/1) (interactive tutorial)
* [GitHub Setup and Instructions](https://help.github.com/articles/set-up-git/)
* [Official Git Documentation](https://git-scm.com/doc)
* [Git Cheat Sheet](https://education.github.com/git-cheat-sheet-education.pdf)
* [Ignore DS\_Store on a Mac](http://jesperrasmussen.com/2013/11/13/globally-ignore-ds-store-on-mac/)
Chapter 5 Introduction to R
===========================
R is an extraordinarily powerful open\-source software program built for working with data. It is one of the most popular data science tools because of its ability to efficiently perform statistical analysis, implement machine learning algorithms, and create data visualizations. R is the primary programming language used throughout this book, and understanding its foundational operations is key to being able to perform more complex tasks.
5\.1 Programming with R
-----------------------
R is a **statistical programming language** that allows you to write code to work with data. It is an **open\-source** programming language, which means that it is free and continually improved upon by the R community. The R language has a number of functionalities that allow you to read, analyze, and visualize datasets.
* *Fun Fact:* R is called “R” because it was inspired by and comes after the language “S”, a language for **S**tatistics developed by AT\&T.
So far you’ve leveraged formal language to give instructions to your computers, such as by writing syntactically\-precise instructions at the command\-line. Programming in R will work in a similar manner: you will write instructions using R’s special language and syntax, which the computer will **interpret** as instructions for how to work with data.
However, as projects grow in complexity, it will become useful if you can write down all the instructions in a single place, and then order the computer to *execute* all of those instructions at once. This list of instructions is called a **script**. Executing or “running” a script will cause each instruction (line of code) to be run *in order, one after the other*, just as if you had typed them in one by one. Writing scripts allows you to save, share, and re\-use your work. By saving instructions in a file (or set of files), you can easily check, change, and re\-execute the list of instructions as you figure out how to use data to answer questions. And, because R is an *interpreted* language rather than a *compiled* language like Java, R programming environments will also give you the ability to execute each individual line of code in your script if you desire (though this will become cumbersome as projects become large).
As you begin working with data in R, you will be writing multiple instructions (lines of code) and saving them in files with the **`.R`** extension, representing R scripts. You can write this R code in any text editor (such as Atom), but we recommend you usually use a program called **RStudio** which is specialized for writing and running R scripts.
5\.2 Running R Scripts
----------------------
R scripts (programs) are just a sequence of instructions, and there are a couple of different ways in which we can tell the computer to execute these instructions.
### 5\.2\.1 Command\-Line
It is possible to issue R instructions (run lines of code) one\-by\-one at the command\-line by starting an **interactive R session** within your terminal. This will allow you to type R code directly into the terminal, and your computer will interpret and execute each line of code (if you just typed R syntax directly into the terminal, your computer wouldn’t understand it).
With R installed, you can start an interactive R session on a Mac by typing `R` into the terminal (to run the `R` program), or on Windows by running the “R” desktop app program. This will start the session and provide you with lots of information about the R language:
An interactive R session running in the terminal.
Notice that this description also includes *instructions on what to do next*—most importantly `"Type 'q()' to quit R."`.
Always read the output when working on the command\-line!
Once you’ve started running an interactive R session, you can begin entering one line of code at a time at the prompt (`>`). This is a nice way to experiment with the R language or to quickly run some code. For example, try doing some math at the command prompt (i.e., enter `1 + 1` and see the output).
* Note that RStudio also provides an interactive console that provides the exact same functionality.
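For example, entering `1 + 1` at the prompt produces output like this:
```
> 1 + 1
[1] 2
```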
It is also possible to run entire scripts from the command\-line by using the `RScript` program, specifying the `.R` file you wish to execute:
Using RScript from the terminal
Entering this command in the terminal would execute each line of R code written in the `analysis.R` file, performing all of the instructions that you had saved there. This is helpful if your data has changed, and you want to reproduce the results of your analysis using the same instructions.
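The command itself is just the program name followed by the script’s filename (note that the executable is typically spelled `Rscript`):
```
# Execute every line of analysis.R, top to bottom
Rscript analysis.R
```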
#### 5\.2\.1\.1 Windows Command\-Line
On Windows, you need to tell the computer where to find the `R.exe` and `RScript.exe` programs to execute—that is, what is the **path** to these programs. You can do this by specifying the *absolute path* to the R program when you execute it, for example:
Using RScript from a Windows shell
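As a sketch, the command might look something like the following (the install folder and version number here are hypothetical; use whatever path matches your own installation):
```
# Hypothetical install path; adjust to match your machine
"C:/Program Files/R/R-3.6.3/bin/Rscript.exe" analysis.R
```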
If you plan to run R from the command\-line regularly (*not* a requirement for this course), a better solution is to add the folder containing these programs to your computer’s [**`PATH` variable**](https://en.wikipedia.org/wiki/PATH_(variable)). This is a *system\-level* variable that contains a list of folders that the computer searches when finding programs to execute. The reason the computer knows where to find the `git.exe` program when you type `git` in the command\-line is because that program is “on the `PATH`”.
In Windows, you can add the `R.exe` and `RScript.exe` programs to your computer’s `PATH` by editing your machine’s **Environment Variables** through the *Control Panel*:
* Open up the “Advanced” tab of the “System Properties”. In Windows 10, you can find this by searching for “environment”. Click on the “Environment Variables…” button to open the settings for the Environment Variables.
* In the window that pops up, select the “PATH” variable (either per user or for the whole system) and click the “Edit” button below it.
* In the next window that pops up, click the “Browse” button to select the *folder that contains* the `R.exe` and `RScript.exe` files. See the above screenshot for one possible path.
* You will need to close and re\-open your command\-line (Git Bash) for the `PATH` changes to take effect.
Overall, using R from the command\-line can be tricky; we recommend you just use RStudio instead.
### 5\.2\.2 RStudio
RStudio is an open\-source **integrated development environment (IDE)** that provides an informative user interface for interacting with the R interpreter. IDEs provide a platform for writing *and* executing code, including viewing the results of the code you have run. If you haven’t already, make sure to download and install [the free version of RStudio](https://www.rstudio.com/products/rstudio/download3/#download).
When you open the RStudio program (either by searching for it, or double\-clicking on a desktop icon), you’ll see the following interface:
RStudio’s user interface. Annotations are in red.
An RStudio session usually involves 4 sections (“panes”), though you can customize this layout if you wish:
* **Script**: The top\-left pane is a simple text editor for writing your R code. While it is not as robust as a text editing program like Atom, it will colorize code, “auto\-complete” text, and allows you to easily execute your code. Note that this pane is hidden if there are no open scripts; select `File > New File > R Script` from the menu to create a new script file.
In order to execute (run) the code you write, you have two options:
1. You can execute a section of your script by selecting (highlighting) the desired code and pressing the “Run” button (keyboard shortcut: `ctrl` and `enter`). If no lines are selected, this will run the line currently containing the cursor. This is the most common way to execute code in R.
+ *Protip:* use `cmd + a` to select the entire script!
2. You can execute an entire script by using the `Source` command to treat the current file as the “source” of code. Press the “Source” button (hover the mouse over it for keyboard shortcuts) to do so. If you check the “Source on save” option, your entire script will be executed every time you save the file (this may or may not be appropriate, depending on the complexity of your script and its output).
* **Console**: The bottom\-left pane is a console for entering R commands. This is identical to an interactive session you’d run on the command\-line, in which you can type and execute one line of code at a time. The console will also show the printed results from any code you execute from the Script pane.
+ *Protip:* just like with the command\-line, you can **use the up arrow** to easily access previously executed lines of code.
* **Environment**: The top\-right pane displays information about the current R environment—specifically, information that you have stored inside of *variables* (see below). In the above example, the value `201` is stored in a variable called `x`. You’ll often create dozens of variables within a script, and the Environment pane helps you keep track of which values you have stored in what variables. *This is incredibly useful for debugging!*
* **Plots, packages, help, etc.**: The bottom right pane contains multiple tabs for accessing various information about your program. When you create visualizations, those plots will render in this quadrant. You can also see what packages you’ve loaded or look up information about files. *Most importantly*, this is also where you can access the official documentation for the R language. If you ever have a question about how something in R works, this is a good place to start!
Note, you can use the small spaces between the quadrants to adjust the size of each area to your liking. You can also use menu options to reorganize the panes if you wish.
5\.3 Comments
-------------
Before discussing how to program with R, we need to talk about a piece of syntax that lets you comment your code. In programming, **comments** are bits of text that are *not interpreted as computer instructions*—they aren’t code, they’re just notes about the code! Since computer code can be opaque and difficult to understand, we use comments to help write down the meaning and *purpose* of our code. While a computer is able to understand the code, comments are there to help *people* understand it. This is particularly important when someone else will be looking at your work—whether that person is a collaborator, or is simply a future version of you (e.g., when you need to come back and fix something and so need to remember what you were even thinking).
Comments should be clear, concise, and helpful—they should provide information that is not otherwise present or “obvious” in the code itself.
In R, we mark text as a comment by putting it after the pound/hashtag symbol (**`#`**). Everything from the `#` until the end of the line is a comment. We put descriptive comments *immediately above* the code they describe, but you can also put very short notes at the end of the line of code (preferably following two spaces):
```
# Set how many bottles of beer are on the wall
bottles <- 99 - 1 # 98 bottles
```
(You may recognize this `#` syntax and commenting behavior from the command\-line and git chapters. That’s because the same syntax is used in a Bash shell!)
5\.4 Variables
--------------
Since computer programs involve working with lots of *information*, we
need a way to store and refer to this information. We do this with
**variables**. Variables are labels for information stored in memory; in R, you can think of them as “boxes” or “nametags” for data. After putting data in a variable box, you can then refer to that data by the name on the box.
Variable names can contain any combination of letters, numbers, periods (`.`), or underscores (`_`). Variable names must begin with a letter. Note that like everything in programming, variable names are case sensitive. It is best practice to make variable names descriptive and informative about what data they contain. `a` is not a good variable name. `cups_of_coffee` is a good variable name. In this course, we will use the [Tidyverse Style Guide](http://style.tidyverse.org/) as a foundation of *how we should style our code*. To comply with this style guide, variables should be **all lower\-case letters, separated by underscores (`_`)**. This is also known as **snake\_case**.
**Note:** There is an important distinction between *syntax* and *style*. You need to use the proper *syntax* such that your *machine* understands your code. This requires that you follow a set of rigid rules for what code can be processed. *Style* is what helps make your code understandable for other *humans*. Good style is not required to get your code to run, but it’s imperative for writing clear, (human) readable code.
We call putting information in a variable **assigning** that value to the variable. We do this using the *assignment operator* **`<-`**. For example:
```
# Stores the number 7 into a variable called `shoe_size`
shoe_size <- 7
```
* *Notice:* variable name goes on the left, value goes on the right!
You can see what value (data) is inside a variable by either typing that variable name as a line of code, or by using R’s built\-in `print()` function (more on functions later):
```
print(shoe_size)
# [1] 7
```
* We’ll talk about the `[1]` in that output later.
You can also use **mathematical operators** (e.g., `+`, `-`, `/`, `*`) when assigning values to variables. For example, you could create a variable that is the sum of two numbers as follows:
```
x <- 3 + 4
```
Once a value (like a number) is *in* a variable, you can use that variable in place of any other value. So all of the following are valid:
```
x <- 2 # store 2 in x
y <- 9 # store 9 in y
z <- x + y # store sum of x and y in z
print(z) # 11
z <- z + 1 # take z, add 1, and store result back in z
print(z) # 12
```
### 5\.4\.1 Basic Data Types
In the example above, we stored **numeric** values in variables. R is a **dynamically typed language**, which means that we *do not* need to explicitly state what type of information will be stored in each variable we create. R is intelligent enough to understand that if we have code `x <- 7`, then `x` will contain a numeric value (and so we can do math upon it!)
There are six “basic types” (called *atomic data types*) for data in R (a quick way to check a value’s type is shown after this list):
* **Numeric**: The default computational data type in R is numeric data, which consists of the set of real numbers (including decimals). We use **mathematical operators** on numeric data (such as `+`, `-`, `*`, `/`, etc.). There are also numerous functions that work on numeric data (such as calculating sums or averages).
* **Character**: Character data stores *strings* of characters (things you type with a keyboard) in a variable. You specify that some information is character data by surrounding it in either single quotes (**`'`**) or double quotes (**`"`**). To comply with the [style guide](http://style.tidyverse.org/syntax.html#quotes), we’ll consider it best practice to use **double quotes**.
```
# Create character variable `famous_poet` with the value "Bill Shakespeare"
famous_poet <- "Bill Shakespeare"
```
Note that character data *is still data*, so it can be assigned to a variable just like numeric data!
There are no special operators for character data, though there are many built\-in functions for working with strings.
* **Logical**: Logical (a.k.a. Boolean) data types store “yes\-or\-no” data. A logical value can be one of two values: `TRUE` or `FALSE`. Importantly, these **are not** the strings `"TRUE"` or `"FALSE"`; logical values are a different type! In an interactive session, you can use the shorthand `T` or `F` in lieu of `TRUE` and `FALSE` in variable assignment, but this is not recommended for programs.
+ *Fun fact:* logical values are called “booleans” after mathematician and logician [George Boole](https://en.wikipedia.org/wiki/George_Boole).
Logical values are most commonly the result of applying a **relational operator** (also called a **comparison operator**) to some other data. Comparison operators are used to compare values and include: `<` (less than), `>` (greater than), `<=` (less\-than\-or\-equal), `>=` (greater\-than\-or\-equal), `==` (equal), and `!=` (not\-equal).
```
# Store values in variables
x <- 3
y <- 3.15
# Compare values in x and y
x > y # returns logical value FALSE (x IS NOT bigger than y)
y != x # returns logical value TRUE (y IS not-equal to x)
# compare x to pi (built-in variable)
y == pi # returns logical value FALSE
# compare strings (based on alphabetical ordering)
"cat" > "dog" # returns FALSE
```
Logical values have their own operators as well (called **logical operators** or **boolean operators**). These *apply to* logical values and *produce* logical values, allowing you to make more complex logical expressions. These include `&` (and), `|` (or), and `!` (not).
```
# Store values in variables
x <- 3.1
y <- 3.2
pet <- "dog"
weather <- "rain"
# Check if x is less than pi AND y is greater than pi
x < pi & y > pi # TRUE
# Check if pet is "cat" OR "dog"
pet == "cat" | pet == "dog" # TRUE
# Check if pet is "dog" AND NOT weather is "rain"
pet == "dog" & !(weather == "rain") # FALSE
```
Note that it’s easy to write complex expressions with logical operators. If you find yourself getting lost, I recommend rethinking your question to see if there is a simpler way to express it!
* **Integer**: Integer values are technically a different data type than numeric values because of how they are stored and manipulated by the R interpreter. This is something that you will rarely encounter, but it’s good to know that you can specify a number is of integer type rather than general numeric type by placing a capital `L` (for “long integer”) after a value in variable assignment (`my_integer <- 10L`).
* **Complex**: Complex (imaginary) numbers have their own data storage type in R, and are created using the `i` syntax: `complex_variable <- 1 + 2i`. We will not be using complex numbers in this course.
* **Raw**: Raw data is a sequence of “raw” bytes. It is good for storing unprocessed data, such as image data. R does not interpret raw data in any particular way. We will not discuss raw data in this book.
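One simple way to check which of these types a value has is the built\-in `class()` function (not covered above, but part of base R):
```
class(7)          # "numeric"
class(7L)         # "integer"
class("seven")    # "character"
class(TRUE)       # "logical"
class(1 + 2i)     # "complex"
```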
5\.5 Getting Help
-----------------
As with any programming language, when working in R you will inevitably run into problems, confusing situations, or just general questions. Here are a few ways to start getting help.
1. **Read the error messages**: If there is an issue with the way you
have written or executed your code, R will often print out a red error message in your console. Do your best to decipher the message (read it carefully, and think about what is meant by each word in the message), or you can put it directly into Google to get more information. You’ll soon get the hang of interpreting these messages if you put the time into trying to understand them.
2. **Google**: When you’re trying to figure out how to do something, it should be no surprise that Google is often the best resource. Try searching for queries like `"how to <DO THING> in R"`. More frequently than not, your question will lead you to a Q/A forum called StackOverflow (see below), which is a great place to find potential answers.
3. **StackOverflow**: StackOverflow is an amazing Q/A forum for asking/answering programming questions. Indeed, most basic questions have already been asked/answered here. However, don’t hesitate to post your own questions to StackOverflow. Be sure to hone in on the specific question you’re trying to answer, and provide error messages and sample code. I often find that, by the time I can articulate the question clearly enough to post it, I’ve figured out my problem anyway.
* There is a classical method of debugging called [rubber duck debugging](https://en.wikipedia.org/wiki/Rubber_duck_debugging), which involves simply trying to explain your code/problem to an inanimate object (talking to pets works too). You’ll usually be able to fix the problem if you just step back and think about how you would explain it to someone else!
4. **Documentation**: R’s documentation is actually quite good. Functions and behaviors are all described in the same format, and often contain helpful examples. To search the documentation within R (or in RStudio), simply type `?` followed by the function name you’re using (more on functions coming soon). You can also search the documentation by typing two question marks (`??SEARCH`). A few of these commands are shown after this list.
* You can also look up help by using the `help()` function (e.g., `help(print)` will look up information on the `print()` function, just like `?print` does). There is also an `example()` function you can call to see examples of a function in action (e.g., `example(print)`). This will be more important in the next module!
* [rdocumentation.org](https://www.rdocumentation.org/) has a lovely searchable and readable interface to the R documentation.
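For example, any of the following can be run from the R console (all are built into base R):
```
?print            # open the documentation for the print() function
help(print)       # equivalent to ?print
??regression      # search the documentation for a topic
example(print)    # run the examples from print()'s documentation
```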
Resources
---------
* [Tidyverse Style Guide](http://style.tidyverse.org/)
* [R Tutorial: Introduction](http://www.r-tutor.com/r-introduction)
* [R Tutorial: Basic Data Types](http://www.r-tutor.com/r-introduction/basic-data-types)
* [R Tutorial: Operators](https://www.tutorialspoint.com/r/r_operators.htm)
* [RStudio Keyboard Shortcuts](https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts)
* [R Documentation](https://www.rdocumentation.org/) searchable online documentation
* [R for Data Science](http://r4ds.had.co.nz/) online textbook, oriented
toward R usage in data processing and visualization
* [aRrgh: a newcomer’s (angry) guide to R](http://arrgh.tim-smith.us/) opinionated but clear introduction
* [The Art of R Programming](https://www.nostarch.com/artofr.htm) print textbook
5\.1 Programming with R
-----------------------
R is a **statistical programming language** that allows you to write code to work with data. It is an **open\-source** programming language, which means that it is free and continually improved upon by the R community. The R language has a number of functionalities that allow you to read, analyze, and visualize datasets.
* *Fun Fact:* R is called “R” because it was inspired by and comes after the language “S”, a language for **S**tatistics developed by AT\&T.
So far you’ve leveraged formal language to give instructions to your computers, such as by writing syntactically\-precise instructions at the command\-line. Programming in R will work in a similar manner: you will write instructions using R’s special language and syntax, which the computer will **interpret** as instructions for how to work with data.
However, as projects grow in complexity, it will become useful if you can write down all the instructions in a single place, and then order the computer to *execute* all of those instructions at once. This list of instructions is called a **script**. Executing or “running” a script will cause each instruction (line of code) to be run *in order, one after the other*, just as if you had typed them in one by one. Writing scripts allows you to save, share, and re\-use your work. By saving instructions in a file (or set of files), you can easily check, change, and re\-execute the list of instructions as you figure out how to use data to answer questions. And, because R is an *interpreted* language rather than a *compiled* language like Java, R programming environments will also give you the ability to execute each individual line of code in your script if you desire (though this will become cumbersome as projects become large).
As you begin working with data in R, you will be writing multiple instructions (lines of code) and saving them in files with the **`.R`** extension, representing R scripts. You can write this R code in any text editor (such as Atom), but we recommend you usually use a program called **RStudio** which is specialized for writing and running R scripts.
5\.2 Running R Scripts
----------------------
R scripts (programs) are just a sequence of instructions, and there are a couple of different ways in which we can tell the computer to execute these instructions.
### 5\.2\.1 Command\-Line
It is possible to issue R instructions (run lines of code) one\-by\-one at the command\-line by starting an **interactive R session** within your terminal. This will allow you to type R code directly into the terminal, and your computer will interpret and execute each line of code (if you just typed R syntax directly into the terminal, your computer wouldn’t understand it).
With R installed, you can start an interactive R session on a Mac by typing `R` into the terminal (to run the `R` program), or on Windows by running the “R” desktop app program. This will start the session and provide you with lots of information about the R language:
An interactive R session running in the terminal.
Notice that this description also include *instructions on what to do next*—most importantly `"Type 'q()' to quit R."`.
Always read the output when working on the command\-line!
Once you’ve started running an interactive R session, you can begin entering one line of code at a time at the prompt (`>`). This is a nice way to experiment with the R language or to quickly run some code. For example, try doing some math at the command prompt (i.e., enter `1 + 1` and see the output).
* Note that RStudio also provides an interactive console that provides the exact same functionality.
It is also possible to run entire scripts from the command\-line by using the `RScript` program, specifying the `.R` file you wish to execute:
Using RScript from the terminal
Entering this command in the terminal would execute each line of R code written in the `analysis.R` file, performing all of the instructions that you had save there. This is helpful if your data has changed, and you want to reproduce the results of your analysis using the same instructions.
#### 5\.2\.1\.1 Windows Command\-Line
On Windows, you need to tell the computer where to find the `R.exe` and `RScript.exe` programs to execute—that is, what is the **path** to these programs. You can do this by specifying the *absolute path* to the R program when you execute it, for example:
Using RScript from a Windows shell
If you plan to run R from the command\-line regularly (*not* a requirement for this course), a better solution is to add the folder containing these programs to your computer’s [**`PATH` variable**](https://en.wikipedia.org/wiki/PATH_(variable)). This is a *system\-level* variable that contains a list of folders that the computer searches when finding programs to execute execute. The reason the computer knows where to find the `git.exe` program when you type `git` in the command\-line is because that program is “on the `PATH`”.
In Windows, You can add the `R.exe` and `RScript.exe` programs to your computer’s `PATH` by editing your machine’s **Environment Variables** through the *Control Panel*:
* Open up the “Advanced” tab of the “System Properties”. In Windows 10, you can find this by searching for “environment”. Click on the “Environment Variables…” button to open the settings for the Environment Variables.
* In the window that pops up, select the “PATH” variable (either per user or for the whole system) and click the “Edit” button below it.
* In the next window that pops up, click the “Browse” button to select the *folder that contains* the `R.exe` and `RScript.exe` files. See the above screenshot for one possible path.
* You will need to close and re\-open your command\-line (Git Bash) for the `PATH` changes to take effect.
Overall, using R from the command\-line can be tricky; we recommend you just use RStudio instead.
### 5\.2\.2 RStudio
RStudio is an open\-source **integrated development environment (IDE)** that provides an informative user interface for interacting with the R interpreter. IDEs provide a platform for writing *and* executing code, including viewing the results of the code you have run. If you haven’t already, make sure to download and install [the free version of RStudio](https://www.rstudio.com/products/rstudio/download3/#download).
When you open the RStudio program (either by searching for it, or double\-clicking on a desktop icon), you’ll see the following interface:
RStudio’s user interface. Annotations are in red.
An RStudio session usually involves 4 sections (“panes”), though you can customize this layout if you wish:
* **Script**: The top\-left pane is a simple text editor for writing your R code. While it is not as robust as a text editing program like Atom, it will colorize code, “auto\-complete” text, and allows you to easily execute your code. Note that this pane is hidden if there are no open scripts; select `File > New File > R Script` from the menu to create a new script file.
In order to execute (run) the code you write, you have two options:
1. You can execute a section of your script by selecting (highlighting) the desired code and pressing the “Run” button (keyboard shortcut: `ctrl` and `enter`). If no lines are selected, this will run the line currently containing the cursor. This is the most common way to execute code in R.
+ *Protip:* use `cmd + a` to select the entire script!
2. You can execute an entire script by using the `Source` command to treat the current file as the “source” of code. Press the “Source” button (hover the mouse over it for keyboard shortcuts) to do so. If you check the “Source on save” option, your entire script will be executed every time you save the file (this may or may not be appropriate, depending on the complexity of your script and its output).
* **Console**: The bottom\-left pane is a console for entering R commands. This is identical to an inetractive session you’d run on the command\-line, in which you can type and execute one line of code at a time. The console will also show the printed results from executing the code you execute from the Script pane.
+ *Protip:* just like with the command\-line, you can **use the up arrow** to easily access previously executed lines of code.
* **Environment**: The top\-right pane displays information about the current R environment—specifically, information that you have stored inside of *variables* (see below). In the above example, the value `201` is stored in a variable called `x`. You’ll often create dozens of variables within a script, and the Environment pane helps you keep track of which values you have stored in what variables. *This is incredibly useful for debugging!*
* **Plots, packages, help, etc.**: The bottom right pane contains multiple tabs for accessing various information about your program. When you create visualizations, those plots will render in this quadrant. You can also see what packages you’ve loaded or look up information about files. *Most importantly*, this is also where you can access the official documentation for the R language. If you ever have a question about how something in R works, this is a good place to start!
Note, you can use the small spaces between the quadrants to adjust the size of each area to your liking. You can also use menu options to reorganize the panes if you wish.
### 5\.2\.1 Command\-Line
It is possible to issue R instructions (run lines of code) one\-by\-one at the command\-line by starting an **interactive R session** within your terminal. This will allow you to type R code directly into the terminal, and your computer will interpret and execute each line of code (if you just typed R syntax directly into the terminal, your computer wouldn’t understand it).
With R installed, you can start an interactive R session on a Mac by typing `R` into the terminal (to run the `R` program), or on Windows by running the “R” desktop app program. This will start the session and provide you with lots of information about the R language:
An interactive R session running in the terminal.
Notice that this description also include *instructions on what to do next*—most importantly `"Type 'q()' to quit R."`.
Always read the output when working on the command\-line!
Once you’ve started running an interactive R session, you can begin entering one line of code at a time at the prompt (`>`). This is a nice way to experiment with the R language or to quickly run some code. For example, try doing some math at the command prompt (i.e., enter `1 + 1` and see the output).
* Note that RStudio also provides an interactive console that provides the exact same functionality.
It is also possible to run entire scripts from the command\-line by using the `RScript` program, specifying the `.R` file you wish to execute:
Using RScript from the terminal
Entering this command in the terminal would execute each line of R code written in the `analysis.R` file, performing all of the instructions that you had save there. This is helpful if your data has changed, and you want to reproduce the results of your analysis using the same instructions.
#### 5\.2\.1\.1 Windows Command\-Line
On Windows, you need to tell the computer where to find the `R.exe` and `RScript.exe` programs to execute—that is, what is the **path** to these programs. You can do this by specifying the *absolute path* to the R program when you execute it, for example:
Using RScript from a Windows shell
If you plan to run R from the command\-line regularly (*not* a requirement for this course), a better solution is to add the folder containing these programs to your computer’s [**`PATH` variable**](https://en.wikipedia.org/wiki/PATH_(variable)). This is a *system\-level* variable that contains a list of folders that the computer searches when finding programs to execute execute. The reason the computer knows where to find the `git.exe` program when you type `git` in the command\-line is because that program is “on the `PATH`”.
In Windows, You can add the `R.exe` and `RScript.exe` programs to your computer’s `PATH` by editing your machine’s **Environment Variables** through the *Control Panel*:
* Open up the “Advanced” tab of the “System Properties”. In Windows 10, you can find this by searching for “environment”. Click on the “Environment Variables…” button to open the settings for the Environment Variables.
* In the window that pops up, select the “PATH” variable (either per user or for the whole system) and click the “Edit” button below it.
* In the next window that pops up, click the “Browse” button to select the *folder that contains* the `R.exe` and `RScript.exe` files. See the above screenshot for one possible path.
* You will need to close and re\-open your command\-line (Git Bash) for the `PATH` changes to take effect.
Overall, using R from the command\-line can be tricky; we recommend you just use RStudio instead.
#### 5\.2\.1\.1 Windows Command\-Line
On Windows, you need to tell the computer where to find the `R.exe` and `RScript.exe` programs to execute—that is, what is the **path** to these programs. You can do this by specifying the *absolute path* to the R program when you execute it, for example:
Using RScript from a Windows shell
If you plan to run R from the command\-line regularly (*not* a requirement for this course), a better solution is to add the folder containing these programs to your computer’s [**`PATH` variable**](https://en.wikipedia.org/wiki/PATH_(variable)). This is a *system\-level* variable that contains a list of folders that the computer searches when finding programs to execute execute. The reason the computer knows where to find the `git.exe` program when you type `git` in the command\-line is because that program is “on the `PATH`”.
In Windows, You can add the `R.exe` and `RScript.exe` programs to your computer’s `PATH` by editing your machine’s **Environment Variables** through the *Control Panel*:
* Open up the “Advanced” tab of the “System Properties”. In Windows 10, you can find this by searching for “environment”. Click on the “Environment Variables…” button to open the settings for the Environment Variables.
* In the window that pops up, select the “PATH” variable (either per user or for the whole system) and click the “Edit” button below it.
* In the next window that pops up, click the “Browse” button to select the *folder that contains* the `R.exe` and `RScript.exe` files. See the above screenshot for one possible path.
* You will need to close and re\-open your command\-line (Git Bash) for the `PATH` changes to take effect.
Overall, using R from the command\-line can be tricky; we recommend you just use RStudio instead.
### 5\.2\.2 RStudio
RStudio is an open\-source **integrated development environment (IDE)** that provides an informative user interface for interacting with the R interpreter. IDEs provide a platform for writing *and* executing code, including viewing the results of the code you have run. If you haven’t already, make sure to download and install [the free version of RStudio](https://www.rstudio.com/products/rstudio/download3/#download).
When you open the RStudio program (either by searching for it, or double\-clicking on a desktop icon), you’ll see the following interface:
RStudio’s user interface. Annotations are in red.
An RStudio session usually involves 4 sections (“panes”), though you can customize this layout if you wish:
* **Script**: The top\-left pane is a simple text editor for writing your R code. While it is not as robust as a text editing program like Atom, it will colorize code, “auto\-complete” text, and allows you to easily execute your code. Note that this pane is hidden if there are no open scripts; select `File > New File > R Script` from the menu to create a new script file.
In order to execute (run) the code you write, you have two options:
1. You can execute a section of your script by selecting (highlighting) the desired code and pressing the “Run” button (keyboard shortcut: `ctrl` and `enter`). If no lines are selected, this will run the line currently containing the cursor. This is the most common way to execute code in R.
+ *Protip:* use `cmd + a` to select the entire script!
2. You can execute an entire script by using the `Source` command to treat the current file as the “source” of code. Press the “Source” button (hover the mouse over it for keyboard shortcuts) to do so. If you check the “Source on save” option, your entire script will be executed every time you save the file (this may or may not be appropriate, depending on the complexity of your script and its output).
* **Console**: The bottom\-left pane is a console for entering R commands. This is identical to an inetractive session you’d run on the command\-line, in which you can type and execute one line of code at a time. The console will also show the printed results from executing the code you execute from the Script pane.
+ *Protip:* just like with the command\-line, you can **use the up arrow** to easily access previously executed lines of code.
* **Environment**: The top\-right pane displays information about the current R environment—specifically, information that you have stored inside of *variables* (see below). In the above example, the value `201` is stored in a variable called `x`. You’ll often create dozens of variables within a script, and the Environment pane helps you keep track of which values you have stored in what variables. *This is incredibly useful for debugging!*
* **Plots, packages, help, etc.**: The bottom right pane contains multiple tabs for accessing various information about your program. When you create visualizations, those plots will render in this quadrant. You can also see what packages you’ve loaded or look up information about files. *Most importantly*, this is also where you can access the official documentation for the R language. If you ever have a question about how something in R works, this is a good place to start!
Note, you can use the small spaces between the quadrants to adjust the size of each area to your liking. You can also use menu options to reorganize the panes if you wish.
5\.3 Comments
-------------
Before discussing how to program with R, we need to talk about a piece of syntax that lets you comment your code. In programming, **comments** are bits of text that are *not interpreted as computer instructions*—they aren’t code, they’re just notes about the code! Since computer code can be opaque and difficult to understand, we use comments to help write down the meaning and *purpose* of our code. While a computer is able to understand the code, comments are there to help *people* understand it. This is particularly imporant when someone else will be looking at your work—whether that person is a collaborator, or is simply a future version of you (e.g., when you need to come back and fix something and so need to remember what you were even thinking).
Comments should be clear, concise, and helpful—they should provide information that is not otherwise present or “obvious” in the code itself.
In R, we mark text as a comment by putting it after the pound/hashtag symbol (**`#`**). Everything from the `#` until the end of the line is a comment. We put descriptive comments *immediately above* the code it describes, but you can also put very short notes at the end of the line of code (preferably following two spaces):
```
# Set how many bottles of beer are on the wall
bottles <- 99 - 1 # 98 bottles
```
(You may recognize this `#` syntax and commenting behavior from the command\-line and git chapters. That’s because the same syntax is used in a Bash shell!)
5\.4 Variables
--------------
Since computer programs involve working with lots of *information*, we
need a way to store and refer to this information. We do this with
**variables**. Variables are labels for information stored in memory; in R, you can think of them as “boxes” or “nametags” for data. After putting data in a variable box, you can then refer to that data by the name on the box.
Variable names can contain any combination of letters, numbers, periods (`.`), or underscores (`_`). Variables names must begin with a letter. Note that like everything in programming, variable names are case sensitive. It is best practice to make variable names descriptive and information about what data they contain. `a` is not a good variable name. `cups_of_coffee` is a good variable name. In this course, we will use the [Tidyverse Style Guide](http://style.tidyverse.org/) as a foundation of *how we should style our code*. To comply with this style guide, variables should be **all lower\-case letters, separated by underscore (`_`)**. This is also known as **snake\_case**.
**Note:** There is an important distinction between *syntax* and *style*. You need to use the proper *syntax* such that your *machine* understands your code. This requires that you follow a set of rigid rules for what code can be procesed. *Style* is what helps make your code understandable for other *humans*. Good style is not required to get your code to run, but it’s imperative for writing clear, (human) readable code.
We call putting information in a variable **assigning** that value to the variable. We do this using the *assignment operator* **`<-`**. For example:
```
# Stores the number 7 into a variable called `shoe_size`
shoe_size <- 7
```
* *Notice:* variable name goes on the left, value goes on the right!
You can see what value (data) is inside a variable by either typing that variable name as a line of code, or by using R’s built\-in `print()` function (more on functions later):
```
print(shoe_size)
# [1] 7
```
* We’ll talk about the `[1]` in that output later.
You can also use **mathematical operators** (e.g., `+`, `-`, `/`, `*`) when assigning values to variables. For example, you could create a variable that is the sum of two numbers as follows:
```
x <- 3 + 4
```
Once a value (like a number) is *in* a variable, you can use that variable in place of any other value. So all of the following are valid:
```
x <- 2 # store 2 in x
y <- 9 # store 9 in y
z <- x + y # store sum of x and y in z
print(z) # 11
z <- z + 1 # take z, add 1, and store result back in z
print(z) # 12
```
### 5\.4\.1 Basic Data Types
In the example above, we stored **numeric** values in variables. R is a **dynamically typed language**, which means that we *do not* need to explicitly state what type of information will be stored in each variable we create. R is intelligent enough to understand that if we have code `x <- 7`, then `x` will contain a numeric value (and so we can do math upon it!)
There are six “basic types” (called *atomic data types*) for data in R; a short sketch after this list shows how to check a value’s type:
* **Numeric**: The default computational data type in R is numeric data, which consists of the set of real numbers (including decimals). We use **mathematical operators** on numeric data (such as `+`, `-`, `*`, `/`, etc.). There are also numerous functions that work on numeric data (such as calculating sums or averages).
* **Character**: Character data stores *strings* of characters (things you type with a keyboard) in a variable. You specify that some information is character data by surrounding it in either single quotes (**`'`**) or double quotes (**`"`**). To comply with the [style guide](http://style.tidyverse.org/syntax.html#quotes), we’ll consider it best practice to use **double quotes**.
```
# Create character variable `famous_poet` with the value "Bill Shakespeare"
famous_poet <- "Bill Shakespeare"
```
Note that character data *is still data*, so it can be assigned to a variable just like numeric data!
There are no special operators for character data, though there are many built\-in functions for working with strings.
* **Logical**: Logical (a.k.a Boolean) data types store “yes\-or\-no”
data. A logical value can be one of two values: `TRUE` or
`FALSE`. Importantly, these **are not** the strings `"TRUE"` or
`"FALSE"`; logical values are a different type! In interactive
session, you can use the shorthand `T` or `F` in lieu of `TRUE` and
`FALSE` in variable assignment but this is not recommended for
programs.
+ *Fun fact:* logical values are called “booleans” after mathematician and logician [George Boole](https://en.wikipedia.org/wiki/George_Boole). Logical values are most commonly the result of applying a **relational operator** (also called a **comparison operator**) to some other data. Comparison operators are used to compare values and include: `<` (less than), `>` (greater than), `<=` (less\-than\-or\-equal), `>=` (greater\-than\-or\-equal), `==` (equal), and `!=` (not\-equal).
```
# Store values in variables
x <- 3
y <- 3.15
# Compare values in x and y
x > y # returns logical value FALSE (x IS NOT bigger than y)
y != x # returns logical value TRUE (y IS not-equal to x)
# compare y to pi (built-in variable)
y == pi # returns logical value FALSE
# compare strings (based on alphabetical ordering)
"cat" > "dog" # returns FALSE
```
Logical values have their own operators as well (called **logical operators** or **boolean operators**). These *apply to* logical values and *produce* logical values, allowing you to make more complex logical expressions. These include `&` (and), `|` (or), and `!` (not).
```
# Store values in variables
x <- 3.1
y <- 3.2
pet <- "dog"
weather <- "rain"
# Check if x is less than pi AND y is greater than pi
x < pi & y > pi # TRUE
# Check if pet is "cat" OR "dog"
pet == "cat" | pet == "dog" # TRUE
# Check if pet is "dog" AND NOT weather is "rain"
pet == "dog" & !(weather == "rain") # FALSE
```
Note that it’s easy to write complex expressions with logical operators. If you find yourself getting lost, I recommend rethinking your question to see if there is a simpler way to express it!
* **Integer**: Integer values are technically a different data type than numeric values because of how they are stored and manipulated by the R interpreter. This is something that you will rarely encounter, but it’s good to know that you can specify a number is of integer type rather than general numeric type by placing a capital `L` (for “long integer”) after a value in variable assignment (`my_integer <- 10L`).
* **Complex**: Complex (imaginary) numbers have their own data storage
type in R, and are created using the `i` syntax: `complex_variable <- 1 + 2i`. We will not be using complex numbers in this course.
* **Raw**: stores a “raw” sequence of bytes, such as image data. R does not interpret raw data in any particular way. We will not discuss raw data in this book.
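As a small illustrative sketch (these particular values are just examples), the built\-in `class()` function reports which of these types a value has; a couple of the string functions mentioned above are shown as well:

```
# Check which basic type a value has
class(3.14)       # "numeric"
class("hello")    # "character"
class(TRUE)       # "logical"
class(10L)        # "integer"
class(1 + 2i)     # "complex"

# Character data has no special operators, but built-in functions work on it
toupper("hello")  # "HELLO"
nchar("hello")    # 5
```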
5\.5 Getting Help
-----------------
As with any programming language, when working in R you will inevitably run into problems, confusing situations, or just general questions. Here are a few ways to start getting help.
1. **Read the error messages**: If there is an issue with the way you
have written or executed your code, R will often print out a red error message in your console. Do your best to decipher the message (read it carefully, and think about what is meant by each word in the message), or you can put it directly into Google to get more information. You’ll soon get the hang of interpreting these messages if you put the time into trying to understand them.
2. **Google**: When you’re trying to figure out how to do something, it should be no surprise that Google is often the best resource. Try searching for queries like `"how to <DO THING> in R"`. More frequently than not, your question will lead you to a Q/A forum called StackOverflow (see below), which is a great place to find potential answers.
3. **StackOverflow**: StackOverflow is an amazing Q/A forum for asking/answering programming questions. Indeed, most basic questions have already been asked/answered here. However, don’t hesitate to post your own questions to StackOverflow. Be sure to hone in on the specific question you’re trying to answer, and provide error messages and sample code. I often find that, by the time I can articulate the question clearly enough to post it, I’ve figured out my problem anyway.
* There is a classical method of debugging called [rubber duck debugging](https://en.wikipedia.org/wiki/Rubber_duck_debugging), which involves simply trying to explain your code/problem to an inanimate object (talking to pets works too). You’ll usually be able to fix the problem if you just step back and think about how you would explain it to someone else!
4. **Documentation**: R’s documentation is actually quite good. Functions and behaviors are all described in the same format, and often contain helpful examples. To search the documentation within R (or in RStudio), simply type `?` followed by the function name you’re using (more on functions coming soon). You can also search the documentation by typing two question marks (`??SEARCH`). A short sketch of these lookups appears after this list.
* You can also look up help by using the `help()` function (e.g., `help(print)` will look up information on the `print()` function, just like `?print` does). There is also an `example()` function you can call to see examples of a function in action (e.g., `example(print)`). This will be more important in the next module!
* [rdocumentation.org](https://www.rdocumentation.org/) has a lovely searchable and readable interface to the R documentation.
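For example, here is a small sketch of these lookups (any function name works in place of `sum`):

```
?sum            # open the help page for the sum() function
help(sum)       # equivalent to ?sum
??"regression"  # search all help pages for the word "regression"
example(sum)    # run the examples (if any) from sum()'s help page
```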
Resources
---------
* [Tidyverse Style Guide](http://style.tidyverse.org/)
* [R Tutorial: Introduction](http://www.r-tutor.com/r-introduction)
* [R Tutorial: Basic Data Types](http://www.r-tutor.com/r-introduction/basic-data-types)
* [R Tutorial: Operators](https://www.tutorialspoint.com/r/r_operators.htm)
* [RStudio Keyboard Shortcuts](https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts)
* [R Documentation](https://www.rdocumentation.org/) searchable online documentation
* [R for Data Science](http://r4ds.had.co.nz/) online textbook, oriented
toward R usage in data processing and visualization
* [aRrgh: a newcomer’s (angry) guide to R](http://arrgh.tim-smith.us/) opinionated but clear introduction
* [The Art of R Programming](https://www.nostarch.com/artofr.htm) print textbook
Chapter 6 Functions
===================
This chapter will explore how to use **functions** in R to perform advanced capabilities and actually ask questions about data. After considering a function in an abstract sense, it will discuss using built\-in R functions, accessing additional functions by loading R packages, and writing your own functions.
6\.1 What are Functions?
------------------------
In a broad sense, a **function** is a named sequence of instructions
(lines of code) that you may want to perform one or more times
throughout a program. They provide a way of *encapsulating* multiple
instructions into a single “unit” that can be used in a variety of
different contexts. So rather than needing to repeatedly write down all
the individual instructions for “make a sandwich” every time you’re
hungry, you can define a `make_sandwich()` function once and then just
**call** (execute) that function when you want to perform those steps.
Typically, functions also accept *inputs* so you can do things slightly differently from time to time. For instance, sometimes you may
want to call `make_sandwich(cheese)` while another time `make_sandwich(chicken)`.
In addition to grouping instructions, functions in programming languages
also allow us to model the mathematical definition of a function, i.e., to perform certain operations on a number of inputs that lead to an
*output*. For instance, look at the
function `max()` that finds the largest value among numbers:
```
max(1,2,3) # 3
```
The inputs are the numbers `1`, `2`, and `3` in parentheses, usually called
**arguments** or **parameters**, and we say that these arguments are
**passed** to a function (like a football). We say that a function then
**returns** a value, number “3” in this example, which we can either
print or assign to a variable and use later.
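For instance (a minimal sketch), the returned value can be stored and reused:

```
largest <- max(1, 2, 3)  # store the returned value (3) in a variable
largest * 10             # 30 -- use it later like any other value
```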
Finally, functions may also have **side effects**. An example is the `cat()` function, which simply prints its arguments. For instance, in the following line of code
```
cat("The answer is", 1+1, "\n")
```
we call the `cat()` function with three arguments: `"The answer is"`, `2`
(note: `1+1` will be evaluated to 2\),
and `"\n"` (the line break symbol). However, here we may not care about the return value but in the side effect—the message being printed on the screen.
6\.2 How to Use Functions
-------------------------
R functions are referred to by name (technically, they are values like
any other variable, just not atomic values). As in many programming
languages, we **call** a function by writing the name of the function
followed immediately (no space) by parentheses `()`. Sometimes this is
enough, for instance
```
Sys.Date()
```
gives us the current date and that’s it.
But often we want the function to do something with our inputs. In this
case we put the **arguments** (inputs) inside the parentheses, separated
by commas (**`,`**). Thus computer functions look just like
multi\-variable mathematical functions, although usually with fancier names than `f()`.
```
# call the sqrt() function, passing it 25 as an argument
sqrt(25) # 5, square root of 25
# count number of characters in "Hello world" as an argument
nchar("Hello world") # 11, note: space is a character too
# call the min() function, pass it 1, 6/8, AND 4/3 as arguments
# this is an example of a function that takes multiple args
min(1, 6/8, 4/3) # 0.75, (6/8 is the smallest value)
```
To keep functions and ordinary variables distinct, we include empty parentheses `()` when referring to a function by name. This does not mean that the function takes no arguments, it is just a useful shorthand for indicating that something is a function.
**Note:** You always need to supply the parentheses if you want to *call* the function (force it to do what it is supposed to do). If you leave the parentheses out, you get the function definition printed on the screen instead. So `cat()` is actually a *function call* while `cat` is the function. You can see that it is a function if you just print it as `print(cat)`.
However, we ignore this distinction here.
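A tiny sketch of this difference:

```
sqrt(16)  # calls the function and returns 4
sqrt      # no parentheses: prints the function object (its definition) instead
```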
If you call any of these functions interactively, R will display the **returned value** (the output) in the console. However, the computer is not able to “read” what is written in the console—that’s for humans to view! If you want the computer to be able to *use* a returned value, you will need to give that value a name so that the computer can refer to it. That is, you need to store the returned value in a variable:
```
# store the min value in the smallest_number variable
smallest_number <- min(1, 6/8, 4/3)
# we can then use the variable as normal, such as for a comparison
min_is_big <- smallest_number > 1 # FALSE
# we can also use functions directly when doing computations
phi <- .5 + sqrt(5)/2 # 1.618...
# we can even pass the result of a function as an argument to another!
# watch out for where the parentheses close!
print(min(1.5, sqrt(3))) # prints 1.5
```
* In the last example, the resulting *value* of the “inner” function
(e.g., `sqrt()`) is immediately used as an argument for the middle
function (i.e., `min()`), whose value is fed in turn to the outer
function `print()`. Because that value is used immediately, we don’t
have to assign it a separate variable name. It is known as an
**anonymous variable**.
* Note also that in the last example, we are solely interested in the side effect of the `print()` function. It also returns its argument (`min(1.5, sqrt(3))` here), but we do not store it in a variable (see the short sketch below).
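For instance (a minimal sketch), the value returned by `print()` can be captured just like any other return value:

```
# print() displays its argument (a side effect) AND returns it
answer <- print(6 * 7)  # prints 42 to the console
answer                  # 42 -- the returned value was also stored
```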
R functions take two types of arguments: **positional arguments** and
**named arguments**: the function has to know how to treat each of its
arguments. For instance, we can round number *e* to 3
digits by `round(2.718282, 3)`. But in order to do this, the `round()` function must know that `2.718282` is the number
and `3` is the requested number of digits, and not the other way around. It understands this because
it requires the number to be the first argument, and digits
the second argument. This approach works well in case of known small
number of inputs. However, this is not an option for functions with
**variable number of arguments**, such as `cat()`. `cat()` just prints
out all of its (potentially a large number of) inputs, except for a limited number of special named
arguments. One of these is `sep`, the string to be placed between the other pieces
of output (by default just a space is printed). Note the difference in output
between
```
cat(1, 2, "-", "\n") # 1 2 -
cat(1, 2, sep="-", "\n") # 1-2-
```
In the first case `cat()` prints `1`, `2`, `"-"`, and the line break
`"\n"`, all separated by a space. In the second case the name `sep`
ensures that `"-"` is not
printed out but instead treated as the separator between `1`, `2` and `"\n"`.
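Named arguments work with ordinary functions such as `round()` as well; once arguments are named, their order no longer matters (a small sketch):

```
round(2.718282, 3)               # 2.718, positional arguments
round(x = 2.718282, digits = 3)  # 2.718, same call with named arguments
round(digits = 3, x = 2.718282)  # 2.718, named arguments can be reordered
```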
6\.3 Built\-in R Functions
--------------------------
As you have likely noticed, R comes with a variety of functions that are built into the language. In the above example, we used the `print()` function to print a value to the console, the `min()` function to find the smallest number among the arguments, and the `sqrt()` function to take the square root of a number. Here is a *very* limited list of functions you can experiment with (or see a few more [here](http://www.statmethods.net/management/functions.html)).
| Function Name | Description | Example |
| --- | --- | --- |
| `sum(a,b,...)` | Calculates the sum of all input values | `sum(1, 5)` returns `6` |
| `round(x,digits)` | Rounds the first argument to the given number of digits | `round(3.1415, 3)` returns `3.142` |
| `toupper(str)` | Returns the characters in uppercase | `toupper("hi there")` returns `"HI THERE"` |
| `paste(a,b,...)` | *Concatenate* (combine) characters into one value | `paste("hi", "there")` returns `"hi there"` |
| `nchar(str)` | Counts the number of characters in a string | `nchar("hi there")` returns `8` (space is a character!) |
| `c(a,b,...)` | *Concatenate* (combine) multiple items into a *vector* (see [chapter 7](vectors.html#vectors)) | `c(1, 2)` returns `1, 2` |
| `seq(a,b)` | Return a sequence of numbers from a to b | `seq(1, 5)` returns `1, 2, 3, 4, 5` |
To learn more about any individual function, look it up in the R documentation by using `?FunctionName` as described in the previous chapter.
“Knowing” how to program in a language is to some extent simply “knowing” what provided functions are available in that language. Thus you should look around and become familiar with these functions… but **do not** feel that you need to memorize them! It’s enough to simply be aware “oh yeah, there was a function that sums up numbers”, and then be able to look up the name and argument for that function.
6\.4 Loading Functions
----------------------
Although R comes with lots of built\-in functions, you can always use
more! **Packages** (also known as **libraries**) are additional sets of
R functions (and data and variables) that are written and published by the R community. Because
many R users encounter the same data management/analysis challenges,
programmers are able to use these libraries and thus benefit from the
work of others (this is the amazing thing about the open\-source
community—people solve problems and then make those solutions
available to others). Popular packages include *dplyr* for
manipulating data, *ggplot2* for visualizations, and *data.table* for
handling large datasets.
Most of the R packages **do not** ship with the R
software by default, and need to be downloaded (once) and then loaded
into your interpreter’s environment (each time you wish to use
them). While this may seem cumbersome, it is a necessary trade\-off
between speed and size. The R software would be huge and slow if it included *all* available packages.
Luckily, it is quite simple to install and load R packages from within
R. To do so, you’ll need to use the *built\-in* R functions
`install.packages` and `library`. Below is an example of installing and
loading the `stringr` package (which contains many handy functions for working with character strings):
```
# Install the `stringr` package. Only needs to be done once on your machine
install.packages("stringr")
```
We stress here that you need to install each package only once per computer.
As installation may be slow and resource\-demanding, you **should not do it repeatedly inside your script!** Moreover, if your script is also run by other users on their computers, you should get their explicit consent before installing additional software for them. The easiest remedy is to rely solely on manual installation.
Exactly the same syntax—`install.packages("stringr")`—is also used for re\-installing a package. You may want to re\-install a package if a newer version comes out, or if you upgrade your R and receive warnings about the package being built under a previous version of R.
After installation, the easiest way to get access to the functions is by
*loading the package*:
```
# Load the package (make the stringr functions available in this R session/program)
library("stringr") # quotes optional here
```
This makes all functions in the *stringr* package available for R (see [the documentation](https://cran.r-project.org/web/packages/stringr/stringr.pdf) for a list of functions included with the `stringr` library). For
instance, if we want to pad the word “justice” from the left with tildes to create a
width\-10 string, we can do
```
str_pad("justice", 10, "left", "~") # "~~~justice"
```
We can use the `str_pad()` function without any additional hassle because the `library()` command made it available.
This is an easy and popular approach. However, what happens if more than one package defines a function with the same name? For instance, many packages implement a function called `filter()`. In this case the more recently loaded package will *mask* the function defined by the previously loaded package. You will also see related warnings when you load the library. In case you want to use a masked
function you can write something like `package::function()` in order to
call it. For instance, we can do the example above with
```
stringr::str_pad("justice", 10, "left", "~") # "~~~justice"
```
This approach—specifying the *namespace* in front of the function—ensures we access the function in the right package. If we call all functions in this way, we don’t even need to load the package with the `library()` command. This is the preferred approach if you only need a few functions from a large library.
6\.5 Writing Functions
----------------------
Even more exciting than loading other people’s functions is writing your
own. Any time you have a task that you may repeat throughout a
script—or sometimes when you just want to organize your code
better—it’s a good practice to write a function to perform that task. This will limit repetition and reduce the likelihood of errors as well as make things easier to read and understand (and thus identify flaws in your analysis).
Functions are values like numbers and characters, so we use the *assignment
operator* (**`<-`**) to store a function into a variable.
Although they are “values” (more precisely *objects*), functions are not [atomic objects](#basic-data-strucures); they can still be stored and manipulated in many of the same ways as other objects.
The [Tidyverse style guide](http://style.tidyverse.org/functions.html) we follow in this book suggests using verbs as function names, written in **snake\_case** just like other variables.
The best way to understand the syntax for defining a function is to look at an example:
```
# A function named `make_full_name` that takes two arguments
# and returns the "full name" made from them
make_full_name <- function(first_name, last_name) {
# Function body: perform tasks in here
full_name <- paste(first_name, last_name)
# paste joins first and last name into a
# single string
# Return: what you want the function to output
return(full_name)
}
# Call the `make_full_name` function with the values "Alice" and "Kim"
my_name <- make_full_name("Alice", "Kim") # "Alice Kim"
```
A function definition contains several important pieces:
* The value assigned to the function variable uses the
syntax `function(....)` to indicate that you are creating a function
(as opposed to a number or character string).
* If you want to feed your function certain inputs, these must be referred to somehow inside your function.
You list these names between the parentheses in the `function(....)`
declaration. These are called **formal arguments**. Formal arguments *will
contain* the values passed in when calling the function (called
**actual arguments**). For example, when we
call `make_full_name("Alice", "Kim")`, the value of the first
actual argument (`"Alice"`) will be assigned to the first formal argument
(`first_name`), and the value of the second actual argument (`"Kim"`) will
be assigned to the second formal argument (`last_name`). Inside the
function’s body, both of these formal arguments behave exactly as
ordinary variables with values “Alice” and “Kim” respectively.
Importantly, we could have made the formal argument names anything
we wanted (`name_first`, `xyz`, etc.), just as long as we then use
*these formal argument names* to refer to the argument while inside
the function.
Moreover, the formal argument names *only apply* while inside the function. You can think of them like “nicknames” for the values. The variables `first_name`, `last_name`, and `full_name` only exist within this particular function.
* **Body**: The body of the function is a **block** of code that falls between curly braces **`{}`** (a “block” is represented by curly braces surrounding code statements). Note that the cleanest style is to put the opening `{` immediately after the argument list, and the closing `}` on its own line.
The function body specifies all the instructions (lines of code)
that your function will perform. A function can contain as many
lines of code as you want—you’ll usually want more than 1 to
make it worth while, but if you have more than 20 you might want to
break it up into separate functions. You can use the formal
arguments here as any other variables, you can create new variables,
call other functions, you can even declare functions inside
functions… basically any code that you would write outside of a
function can be written inside of one as well!
All the variables you create in the function body are **local
variables**. These are only visible from within the function and
“will be forgotten” as soon as you return from the function. However,
variables defined outside of the function are still visible from
within.
* **Return value** is what your function produces. You can specify this
by calling the `return()` function and passing it the value that
you wish *your function* to return. It is typically the last line
of the function. Note that even though we returned a variable
called `full_name`, that variable was *local* to the function and so
doesn’t exist outside of it; thus we have to store the returned
value into another variable if we want to use it later (as with `my_name <- make_full_name("Alice", "Kim")`).
The `return()` statement is usually unnecessary, as R implicitly returns the last object it evaluated anyway. So we may shorten the function definition to
```
make_full_name <- function(first_name, last_name) {
full_name <- paste(first_name, last_name)
}
```
or even not store the concatenated names into `full_name`:
```
make_full_name <- function(first_name, last_name) {
paste(first_name, last_name)
}
```
The last evaluation was concatenating the first and last name, and
hence the full name will be implicitly returned.
We can call (execute) a function we defined exactly in the same way we
call built\-in functions. When we do so, R will take the *actual
arguments* we passed in (e.g., `"Alice"` and `"Kim"`) and assign them to
the *formal arguments*. Then it executes each line of code in the *function body* one at a time. When it gets to the `return()` call, it will end the function and return the given value, which can then be assigned to a different variable outside of the function.
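As a brief illustration (using the `make_full_name()` function defined above, with hypothetical names), the returned value can be stored and used, while the function’s local variables are not visible outside of it:

```
greeting <- make_full_name("Ada", "Lovelace")
print(greeting)     # "Ada Lovelace"

# The local variable `full_name` does not exist outside the function:
# print(full_name)  # would produce an error: object 'full_name' not found
```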
6\.6 Conditional Statements
---------------------------
Functions are one way to organize and control the flow of execution
(e.g., what lines of code get run in what order). There are other
ways. In all languages, we can specify that different instructions will
be run based on a different set of conditions. These are **Conditional
statements** which specify which chunks of code will run
depending on which conditions are true. This is valuable both within
and outside of functions.
In an abstract sense, a conditional statement is saying:
```
IF something is true
do some lines of code
OTHERWISE
do some other lines of code
```
In R, we write these conditional statements using the keywords **`if`** and **`else`** and the following syntax:
```
if (condition) {
# lines of code to run if condition is TRUE
} else {
# lines of code to run if condition is FALSE
}
```
(Note that the `else` needs to be on the same line as the closing `}` of the `if` block. It is also possible to omit the `else` and its block.)
The `condition` can be any variable or expression that resolves to a logical value (`TRUE` or `FALSE`). Thus both of the conditional statements below are valid:
```
porridge_temp <- 115 # in degrees F
if (porridge_temp > 120) {
print("This porridge is too hot!")
}
too_cold <- porridge_temp < 70
# a logical value
if (too_cold) {
print("This porridge is too cold!")
}
```
Note, we can extend the set of conditions evaluated using an `else if` statement. For example:
```
# Function to determine if you should eat porridge
test_food_temp <- function(temp) {
if (temp > 120) {
status <- "This porridge is too hot!"
} else if (temp < 70) {
status <- "This porridge is too cold!"
} else {
status <- "This porridge is just right!"
}
return(status)
}
# Use function on different temperatures
test_food_temp(119) # "This porridge is just right!"
test_food_temp(60) # "This porridge is too cold!"
test_food_temp(150) # "This porridge is too hot!"
```
See more about `if`, `else` and other related constructs in [Appendix C](control-structures.html#control-structures).
Resources
---------
* [R Function Cheatsheet](https://cran.r-project.org/doc/contrib/Short-refcard.pdf)
* [User Defined R Functions](http://www.statmethods.net/management/userfunctions.html)
Chapter 7 Vectors
=================
This chapter covers the foundational concepts for working with vectors in R. Vectors are *the* fundamental data type in R: in order to use R, you need to become comfortable with vectors. This chapter will discuss how R stores information in vectors, the way in which operations are executed in *vectorized* form, and how to extract subsets of vectors. These concepts are **key to effectively programming** in R.
7\.1 What is a Vector?
----------------------
**Vectors** are *one\-dimensional ordered collections of values* that are all
stored in a single variable. For example, you can make a vector
`people` that contains the character strings “Sarah”, “Amit”, and
“Zhang”. Alternatively, you could make a vector `numbers` that stores
the numbers from 1 to 100\. Each value in a vector is referred to as an **element** of that vector; thus the `people` vector would have 3 elements, `"Sarah"`, `"Amit"`, and `"Zhang"`, and the `numbers` vector will have 100 elements. *Ordered* means that once in the vector, the elements will remain there in the original order. If “Amit” was put in the second position, it will remain in the second position unless explicitly moved.
Unfortunately, there are at least five different, sometimes contradictory, definitions of what a “vector” is in R. Here we focus on **atomic vectors**, vectors that contain the [atomic data types](r-intro.html#basic-data-types). Another, different class of vectors is **generalized vectors**, or **lists**, the topic of the [Lists chapter](lists.html#lists).
An atomic vector can only contain elements of an atomic data type—numeric, integer, character, or logical. Importantly, all the elements in a vector need to have the same type. You can’t have an atomic vector whose elements include both numbers and character strings.
7\.2 Creating Vectors
---------------------
The easiest and most universal syntax for creating vectors is to use the built\-in `c()` function, which is used to ***c***ombine values into a vector. The `c()` function takes in any number of **arguments** of the same type (separated by commas as usual), and **returns** a vector that contains those elements:
```
# Use the combine (`c`) function to create a vector.
people <- c("Sarah", "Amit", "Zhang")
print(people) # [1] "Sarah" "Amit" "Zhang"
numbers <- c(1, 2, 3, 4, 5)
print(numbers) # [1] 1 2 3 4 5
```
You can use the `length()` function to determine how many **elements** are in a vector:
```
people <- c("Sarah", "Amit", "Zhang")
length(people) # [1] 3
numbers <- c(1, 2, 3, 4, 5)
length(numbers) # [1] 5
```
As atomic vectors can only contain elements of the same type, `c()` automatically **casts** (converts) one type to the other if necessary
(and if possible). For instance, when attempting to create a vector
containing number 1 and character “a”
```
mix <- c(1, "a")
mix # [1] "1" "a"
```
we get a *character vector* where the number 1 was converted to a
character “1”. This is a frequent problem when reading data where some
fields contain invalid number codes.
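For instance (a hypothetical sketch), converting such character data back to numbers with `as.numeric()` turns any invalid codes into `NA`:

```
# Values as they might arrive from a data file (made-up example)
raw_values <- c("3.5", "7", "n/a")
as.numeric(raw_values)  # 3.5 7.0 NA, with a warning about coercion
```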
There are other handy ways to create vectors. For example, the `seq()`
function mentioned in [chapter 6](functions.html#functions) takes 2 (or more) arguments and
produces a vector of the integers between them. An *optional* third argument specifies the step by which to increment the numbers:
```
# Make vector of numbers 1 to 90
one_to_ninety <- seq(1, 90)
print(one_to_ninety) # [1] 1 2 3 4 5 ...
# Make vector of numbers 1 to 10, counting by 2
odds <- seq(1, 10, 2)
print(odds) # [1] 1 3 5 7 9
```
* When you print out `one_to_ninety`, you’ll notice that in addition to the leading `[1]` that you’ve seen in all printed results, there are additional bracketed numbers at the start of each line. These bracketed numbers tell you the element number (**index**, see below) at which that line starts. Thus the `[1]` means that the printed line shows elements starting at element number `1`, a `[20]` means that the printed line shows elements starting at element number `20`, and so on. This is to help make the output more readable, so you know where in the vector you are when looking at a printed line of elements!
As a shorthand, you can produce a sequence with the **colon operator** (**`a:b`**), which returns a vector `a` to `b` with the element values incrementing by `1`:
```
one_to_ninety <- 1:90
```
Another useful function that creates vectors is `rep()`, which repeats
its first argument:
```
rep("Xi'an", 5) # [1] "Xi'an" "Xi'an" "Xi'an" "Xi'an" "Xi'an"
```
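As a small illustration of `rep()`’s flexibility, its optional `each` argument repeats every element of a vector in turn (a sketch, using made-up values):
```
# Repeat each element of the vector three times
rep(c(1, 2), each = 3) # [1] 1 1 1 2 2 2
```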
`c()` can also be used to add elements to an existing vector:
```
# Use the combine (`c()`) function to create a vector.
people <- c("Sarah", "Amit", "Zhang")
# Use the `c()` function to combine the `people` vector and the name 'Josh'.
more_people <- c(people, 'Josh')
print(more_people) # [1] "Sarah" "Amit" "Zhang" "Josh"
```
Note that `c()` retains the order of elements—“Josh” will be the
last element in the extended vector.
All the vector creation functions introduced here, `c()`, `seq()`, and `rep()`, are noticeably more powerful and flexible than this brief discussion suggests. You are encouraged to read their help pages!
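For example, you can open a help page from the R console with the `?` operator or the `help()` function:
```
# Open the documentation for seq() and rep()
?seq  # equivalent to help(seq)
?rep
```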
7\.3 Vector Indices
-------------------
Vectors are the fundamental structure for storing collections of data. Yet you often want to only work with *some* of the data in a vector. This section will discuss a few ways that you can get a **subset** of elements in a vector.
In particular, you can refer to individual elements in a vector by their
**index** (more specifically **numeric index**), which is the number of their position in the vector. For example, in the vector:
```
vowels <- c('a','e','i','o','u')
```
The `'a'` (the first element) is at *index* 1, `'e'` (the second element) is at index 2, and so on.
Note in R vector elements are indexed starting with `1` (**one\-based indexing**). This is distinct from many other programming languages
which use **zero\-based indexing** and so reference the first element
at index `0`.
### 7\.3\.1 Simple Numeric Indices
You can retrieve a value from a vector using **bracket notation**: you refer to the element at a particular index of a vector by writing the name of the vector, followed by square brackets (**`[]`**) that contain the index of interest:
```
# Create the people vector
people <- c("Sarah", "Amit", "Zhang")
# access the element at index 1
people[1] # [1] "Sarah"
# access the element at index 2
people[2] # [1] "Amit"
# You can also use variables inside the brackets
last_index <- length(people) # last index is the length of the vector!
people[last_index] # returns "Zhang"
# You may want to check out the `tail()` function instead!
```
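As a brief illustration of that last comment, `tail()` returns the last elements of a vector, so asking for one element gives the final value without computing the index yourself:
```
people <- c("Sarah", "Amit", "Zhang")
# Get the last element directly
tail(people, n = 1) # [1] "Zhang"
```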
Don’t get confused by the `[1]` in the printed output—it doesn’t refer to the index you requested from `people`, but to the position within the *extracted* result that is being printed!
If you specify an index that is **out\-of\-bounds** (e.g., greater than
the number of elements in the vector) in the square brackets, you will
get back the value `NA`, which stands for **N**ot **A**vailable. Note
that this is *not* the *character string* `"NA"`, but a specific value,
specially designed to denote missing data.
```
vowels <- c('a','e','i','o','u')
# Attempt to access the 10th element
vowels[10] # returns NA
```
If you specify a **negative index** in the square\-brackets, R will return all elements *except* the (negative) index specified:
```
vowels <- c("a", "e", "i", "o", "u")
# Return all elements EXCEPT that at index 2
all_but_e <- vowels[-2]
print(all_but_e) # [1] "a" "i" "o" "u"
```
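You can also exclude several positions at once by passing a vector of negative indices, a pattern that follows directly from the multiple-indices idea in the next section:
```
vowels <- c("a", "e", "i", "o", "u")
# Return all elements EXCEPT those at indices 1 and 2
vowels[-c(1, 2)] # [1] "i" "o" "u"
```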
### 7\.3\.2 Multiple Indices
Remember that in R, **all atomic objects are vectors**. This means that when you put a single number inside the square brackets, you’re actually putting a *vector with a single element in it* into the brackets. So what you’re really doing is specifying a **vector of indices** that you want R to extract from the vector. As such, you can put a vector of any length inside the brackets, and R will extract *all* the elements with those indices from the vector (producing a **subset** of the vector elements):
```
# Create a `colors` vector
colors <- c("red", "green", "blue", "yellow", "purple")
# Vector of indices to extract
indices <- c(1, 3, 4)
# Retrieve the colors at those indices
colors[indices] # [1] "red" "blue" "yellow"
# Specify the index array anonymously
colors[c(2, 5)] # [1] "green" "purple"
```
It’s very handy to use the **colon operator** to quickly specify a range of indices to extract:
```
colors <- c("red", "green", "blue", "yellow", "purple")
# Retrieve values in positions 2 through 5
colors[2:5] # [1] "green" "blue" "yellow" "purple"
```
This easily reads as *“a vector of the elements in positions 2 through
5”*.
The object returned by multiple indexing (and also by a single index) is
a **copy of the original**, unlike in some other programming
languages. This is good news in terms of avoiding unexpected effects:
modifying the returned copy does not affect the original.
However, copying large objects may be costly and make your code slow and sluggish.
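A minimal sketch of this copy behavior (the variable name `first_two` is just for illustration):
```
colors <- c("red", "green", "blue")
first_two <- colors[1:2]  # a copy of the first two elements
first_two[1] <- "pink"    # modify the copy
colors                    # the original is unchanged
## [1] "red"   "green" "blue"
```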
### 7\.3\.3 Logical Indexing
In the above section, you used a vector of indices (*numeric* values) to
retrieve a subset of elements from a vector. Alternatively, you can put a **vector of logical values** inside the square brackets to specify which ones you want to extract (`TRUE` in the *corresponding position* means extract, `FALSE` means don’t extract):
```
# Create a vector of shoe sizes
shoe_sizes <- c(7, 6.5, 4, 11, 8)
# Vector of elements to extract
filter <- c(TRUE, FALSE, FALSE, TRUE, TRUE)
# Extract every element in an index that is TRUE
shoe_sizes[filter] # [1] 7 11 8
```
R will go through the logical vector and extract every item at a position that is `TRUE`. In the example above, since `filter` is `TRUE` at indices 1, 4, and 5, `shoe_sizes[filter]` returns a vector with the elements from indices 1, 4, and 5\.
This may seem a bit strange, but it is actually incredibly powerful because it lets you select elements from a vector that *meet certain criteria* (called **filtering**). You perform this *filtering operation* by first creating a vector of logical values that correspond to the indices meeting those criteria, and then putting that filter vector inside the square brackets:
```
# Create a vector of shoe sizes
shoe_sizes <- c(7, 6.5, 4, 11, 8)
# Create a boolean vector that indicates if a shoe size is greater than 6.5
shoe_is_big <- shoe_sizes > 6.5 # T, F, F, T, T
# Use the `shoe_is_big` vector to select large shoes
big_shoes <- shoe_sizes[shoe_is_big] # returns 7, 11, 8
```
There is often little reason to explicitly create the index vector
`shoe_is_big`. You can combine the second and third lines of code into
a single statement with an anonymous index vector:
```
# Create a vector of shoe sizes
shoe_sizes <- c(7, 6.5, 4, 11, 8)
# Select shoe sizes that are greater than 6.5
shoe_sizes[shoe_sizes > 6.5] # returns 7, 11, 8
```
You can think of this statement as saying “shoe\_sizes **where**
shoe\_sizes is greater than 6\.5”. This is a valid statement because the
logical expression inside the square brackets (`shoe_sizes > 6.5`) is
evaluated first, producing an anonymous logical vector which is then used to filter the `shoe_sizes` vector.
This kind of filtering is immensely popular in real\-life applications.
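For example, filtering conditions can be combined with the logical operators `&` (and) and `|` (or), which are themselves vectorized:
```
shoe_sizes <- c(7, 6.5, 4, 11, 8)
# Select sizes greater than 6.5 but less than 10
shoe_sizes[shoe_sizes > 6.5 & shoe_sizes < 10] # [1] 7 8
```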
### 7\.3\.4 Named Vectors and Character Indexing
All the vectors we created above were created without names. But
vector elements can have names, and if they do, we can access those
elements by name. There are two ways to create **named vectors**.
First, we can add names when creating a vector with the `c()` function:
```
param <- c(gamma=3, alpha=1.7, "c-2"=-1.33)
param
## gamma alpha c-2
## 3.00 1.70 -1.33
```
This creates a numeric vector of length 3 where each element has a
name. Note that we have to quote names, such as “c\-2”, that are not
valid R variable names. Note also that the printout differs from that
of unnamed vectors; in particular, the index position (`[1]`) is not
printed.
Alternatively, we can set names to an already existing vector using the
`names()`
function:[1](#fn1)
```
numbers <- 1:5
names(numbers) <- c("A", "B", "C", "D", "E")
numbers
## A B C D E
## 1 2 3 4 5
```
Now that we have a named vector, we can access its elements by name.
For instance
```
numbers["C"]
## C
## 3
numbers[c("D", "B")]
## D B
## 4 2
```
Note that in the latter case the names `"B"` and `"D"` are in the “wrong
order”, i.e. not in the same order as they appear in the vector `numbers`.
This works just fine: the elements are extracted in the order
they are specified in the index. (This is only possible with character
and numeric indices; a logical index can only extract elements in their
original order.)
While most vectors we encounter in this book gain little from
names, exactly the same approach also applies to lists and data frames,
where character indexing is one of the important workhorses.
Another important use case of named vectors in R is as a substitute for **maps**
(aka **dictionaries**). Maps are just lookup tables where we can
find the value that corresponds to a given key. For instance, the example above found
the values that correspond to the names `"D"` and `"B"`.
7\.4 Modifying Vectors
----------------------
Indexing can also be used to modify elements within the vector. To do this, put the extracted *subset* on the **left\-hand side** of the assignment operator, and then assign the element a new value:
```
# Create a vector of school supplies
school_supplies <- c("Backpack", "Laptop", "Pen")
# Replace 'Pen' (element at index 3) with 'Pencil'
school_supplies[3] <- "Pencil"
```
And of course, there’s no reason that you can’t select multiple elements on the left\-hand side, and assign them multiple values. The assignment operator is *vectorized*!
```
# Create a vector of school supplies
school_supplies <- c("Backpack", "Laptop", "Pen")
# Replace 'Laptop' with 'Tablet', and 'Pen' with 'Pencil'
school_supplies[c(2, 3)] <- c("Tablet", "Pencil")
```
If your vector has names, you can use character indexing in exactly the
same way, as in the short sketch below.
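For instance, reusing the named `param` vector from above, we can replace an element by its name:
```
param <- c(gamma = 3, alpha = 1.7, "c-2" = -1.33)
# Replace the element named "alpha"
param["alpha"] <- 2.0
param
## gamma alpha   c-2
##  3.00  2.00 -1.33
```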
Logical indexing offers some very powerful possibilities. Imagine you
had a vector of values in which you wanted to replace all numbers
greater than 10 with the number 10 (to “cap” the values). We can
achieve this with a one\-liner:
```
# Vector of values
v1 <- c(1, 5, 55, 1, 3, 11, 4, 27)
# Replace all values greater than 10 with 10
v1[v1 > 10] <- 10 # returns 1, 5, 10, 1, 3, 10, 4, 10
```
In this example, we first compute the logical index of the “too large”
values with `v1 > 10`, and then assign the value 10 to all those
elements of `v1`. Replacing the elements of a numeric vector with their
absolute values can be done in a similar fashion:
```
v <- c(1,-1,2,-2)
v[v < 0] <- -v[v < 0]
v # [1] 1 1 2 2
```
As a first step we find the logical index of the negative elements of
`v`: `v < 0`. Next, we flip the sign of these elements in `v` by
replacing them with `-v[v < 0]`.
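For comparison, base R already provides the vectorized `abs()` function, which accomplishes the same thing in one call:
```
v <- c(1, -1, 2, -2)
abs(v) # [1] 1 1 2 2
```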
7\.5 Vectorized Operations
--------------------------
Many R operators and functions are optimized for vectors, i.e. when fed
a vector, they work on all elements of that vector. These operations
are usually very fast and efficient.
### 7\.5\.1 Vectorized Operators
When performing operations (such as mathematical operations `+`, `-`, etc.) on vectors, the operation is applied to vector elements **member\-wise**. This means that each element from the first vector operand is modified by the element in the **same corresponding position** in the second vector operand, in order to determine the value *at the corresponding position* of the resulting vector. E.g., if you want to add (`+`) two vectors, then the value of the first element in the result will be the sum (`+`) of the first elements in each vector, the second element in the result will be the sum of the second elements in each vector, and so on.
```
# Create two vectors to combine
v1 <- c(1, 1, 1, 1, 1)
v2 <- c(1, 2, 3, 4, 5)
# Create arithmetic combinations of the vectors
v1 + v2 # returns 2, 3, 4, 5, 6
v1 - v2 # returns 0, -1, -2, -3, -4
v1 * v2 # returns 1, 2, 3, 4, 5
v1 / v2 # returns 1, .5, .33, .25, .2
# Add a vector to itself (why not?)
v3 <- v2 + v2 # returns 2, 4, 6, 8, 10
# Perform more advanced arithmetic!
v4 <- (v1 + v2) / (v1 + v1) # returns 1, 1.5, 2, 2.5, 3
```
### 7\.5\.2 Vectorized Functions
> *Vectors In, Vector Out*
Because all atomic objects are vectors, pretty much every
function you’ve used so far actually applies to vectors, not just to
single values. These are referred to as **vectorized functions**, and
will run significantly faster than non\-vector approaches. You’ll find
that functions work the same way for vectors as they do for single
values, because single values are just instances of vectors! For
instance, we can use `paste()` to
concatenate the elements of two character vectors:
```
colors <- c("Green", "Blue")
spaces <- c("sky", "grass")
# Note: look up the `paste()` function if it's not familiar!
paste(colors, spaces) # "Green sky", "Blue grass"
```
Notice the same *member\-wise* combination is occurring: the `paste()` function is applied to the first elements, then to the second elements, and so on.
* *Fun fact:* The mathematical operators (e.g., `+`) are actually functions in R that take 2 arguments (the operands). The mathematical notation we’re used to using is just a shortcut.
```
# these two lines of code are the same:
x <- 2 + 3 # add 2 and 3
x <- '+'(2, 3) # add 2 and 3
```
For another example consider the `round()` function described in the previous chapter. This function rounds the given argument to the nearest whole number (or number of decimal places if specified).
```
# round number to 1 decimal place
round(1.67, 1) # returns 1.7
```
But recall that the `1.7` in the above example is *actually a vector of
length 1*. If we instead pass a longer vector as an argument, the function will perform the same rounding on each element in the vector.
```
# Create a vector of numbers
nums <- c(3.98, 8, 10.8, 3.27, 5.21)
# Perform the vectorized operation
round(nums, 1) # [1] 4.0 8.0 10.8 3.3 5.2
```
This vectorization process is ***extremely powerful***, and is a significant factor in what makes R an efficient language for working with large data sets (particularly in comparison to languages that require explicit iteration through elements in a collection). Thus to write really effective R code, you’ll need to be comfortable applying functions to vectors of data, and getting vectors of data back as results.
Just remember: *when you use a vectorized function on a vector, you’re using that function **on each item** in the vector*!
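To make the contrast with explicit iteration concrete, here is a minimal sketch; the `for` loop version is shown only for comparison, not as recommended style:
```
nums <- c(3.98, 8, 10.8, 3.27, 5.21)
# Element-by-element loop: verbose, and slower on large vectors
result <- numeric(length(nums))
for (i in seq_along(nums)) {
  result[i] <- round(nums[i], 1)
}
# Vectorized call: one line does the same work
round(nums, 1) # [1] 4.0 8.0 10.8 3.3 5.2
```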
### 7\.5\.3 Recycling
Above we saw a number of vectorized operations, where similar operations
were applied to elements of two vectors member\-wise. However, what
happens if the two vectors are of unequal length?
**Recycling** refers to what R does in cases when there are an unequal number of elements in two operand vectors. If R is tasked with performing a vectorized operation with two vectors of unequal length, it will reuse (*recycle*) elements from the shorter vector. For example:
```
# Create vectors to combine
v1 <- c(1, 3, 5, 8)
v2 <- c(1, 2)
# Add vectors
v1 + v2 # [1] 2 5 6 10
```
In this example, R first combined the elements in the first position of
each vector (`1+1=2`). Then, it combined elements from the second
position (`3+2=5`). When it got to the third element of `v1` it ran out
of elements of `v2`, so it went back to the **beginning** of `v2` to
select a value, yielding `5+1=6`. Finally, it combined the 4th element
of `v1` (8\) with the second element of `v2` (2\) to get 10\.
If the length of the longer object is not a multiple of the length of the
shorter object, R will issue a warning, notifying you that the lengths do not match.
This warning doesn’t necessarily mean you did something wrong, although
in practice it often does (see the sketch below).
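A minimal sketch of the warning case, using two made-up vectors whose lengths (4 and 3) do not match evenly:
```
v1 <- c(1, 3, 5, 8)
v3 <- c(1, 2, 3)
v1 + v3
## [1] 2 5 8 9
## Warning message:
## In v1 + v3 : longer object length is not a multiple of shorter object length
```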
### 7\.5\.4 R Is a Vectorized World
Actually we have already met many more examples of recycling and
vectorized functions above. For
instance, in the case of finding big shoes with
```
shoe_sizes <- c(7, 6.5, 4, 11, 8)
shoe_sizes[shoe_sizes > 6.5]
```
we first recycle the length\-one vector `6.5` five times to match it to
the shoe size vector `c(7, 6.5, 4, 11, 8)`. Afterwards we use the
vectorized operator `>` (or actually the function `>`) to compare
each of the shoe sizes with the value 6\.5\. The result is a logical
vector of length 5\.
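You can see that intermediate logical vector by evaluating the comparison on its own:
```
shoe_sizes <- c(7, 6.5, 4, 11, 8)
shoe_sizes > 6.5
## [1]  TRUE FALSE FALSE  TRUE  TRUE
```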
This is also what happens if you add a vector and a “regular” single value (a **scalar**):
```
# create vector of numbers 1 to 5
v1 <- 1:5
v1 + 4 # add scalar to vector
## [1] 5 6 7 8 9
```
As you can see (and probably expected), the operation added `4` to every
element in the vector. The reason this sensible behavior occurs is
because all atomic objects are vectors. Even when you thought you were
creating a single value (a scalar), you were actually just creating a
vector with a single element (length 1\). When you create a variable
storing the number `7` (with `x <- 7`), R creates a vector of length 1
with the number `7` as that single element:
```
# Create a vector of length 1 in a variable x
x <- 7 # equivalent to `x <- c(7)`
```
* This is why R prints the `[1]` in front of all results: it’s telling you that it’s showing a vector (which happens to have 1 element) starting at element number 1\.
* This is also why you can’t use the `length()` function to get the
length of a character string; it just returns the length of the vector
containing that string (`1`). Instead, use the `nchar()` function to
get the number of characters in each element of a character vector (see the example below).
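A quick illustration of the difference:
```
word <- "hello"
length(word) # [1] 1 -- one element in the vector
nchar(word)  # [1] 5 -- five characters in the string
```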
Thus when you add a “scalar” such as `4` to a vector, what you’re really doing is adding a vector with a single element `4`. As such the same *recycling* principle applies, and that single element is “recycled” and applied to each element of the first operand.
Note: here we are implicitly using the word *vector* in two different
meanings. One is the way R stores objects (atomic vectors); the other
is *vector* in the mathematical sense, as opposed to a *scalar*.
Similar confusion also occurs with matrices. Matrices as mathematical
objects are distinct from vectors (and scalars); in R they are stored
as vectors, and treated as matrices only in dedicated matrix operations.
Finally, you should also know that there are many kinds of objects in
R that are not vectors. These include functions, and many other more
“exotic” objects.
Resources
---------
* [R Tutorial: Vectors](http://www.r-tutor.com/r-introduction/vector)
| Field Specific |
info201.github.io | https://info201.github.io/lists.html |
Chapter 8 Lists
===============
This chapter covers an additional R data type called lists. Lists are
somewhat similar to atomic vectors (they are “generalized vectors”!),
but can store more types of data and more details *about* that data
(with some cost). Lists are another way to create R’s version of a
[**Map**](https://en.wikipedia.org/wiki/Associative_array) data structure, a common and extremely useful way of organizing data in a computer program. Moreover: lists are used to create *data frames*, which is the primary data storage type used for working with sets of real data in R. This chapter will cover how to create and access elements in a list, as well as how to apply functions to lists or vectors.
8\.1 What is a List?
--------------------
A **List** is a lot like an atomic vector. It is also a
*one\-dimensional, ordered collection of data*. Exactly as with atomic
vectors, list elements preserve their order and have a well\-defined
position in the list. However, lists have a few major differences from vectors:
1. Unlike a vector, you can store elements of *different types* in a
list: e.g., a list can contain numeric data *and* character string data,
functions, and even other lists.
2. Because lists can contain any type of data, they are much less efficient
than vectors. The vectorized operations that handle atomic vectors on
the fly usually fail for lists, or work substantially more slowly. Hence one should prefer atomic
vectors over lists if possible.
3. Elements in a list can also be named, and unlike with vectors,
there is a convenient shorthand, the `$` construct, for extracting named elements from lists (see the short sketch after this list).
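A small, self-contained preview of that `$` shorthand (a minimal sketch, not the full treatment):
```
# Extract a named element from a list with the $ shorthand
person <- list(first_name = "Ada", salary = 78000)
person$salary # [1] 78000
```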
Lists are extremely useful for organizing data. They allow you to group
together data like a person’s name (characters), job title (characters),
salary (number), and whether they are in a union (logical)—and you
don’t have to remember whether the person’s name or title was the first
element! In this sense lists can be used as a quick alternative to
formal classes, objects that can store heterogeneous data in a
consistent way. This is one of the primary uses of lists.
8\.2 Creating Lists
-------------------
You create a list by using the `list()` function and passing it any number of **arguments** (separated by commas) that you want to make up that list—similar to the `c()` function for vectors.
However, if your list contains heterogeneous elements, it is usually a
good idea to specify the **names** (or **tags**) for each element in the list in the same
way you can give names to vector elements in `c()`—by putting the name tag (which is like a variable name), followed by an equals symbol (**`=`**), followed by the value you want to go in the list and be associated with that tag. For example:
```
person <- list(first_name = "Ada", job = "Programmer", salary = 78000,
in_union = TRUE)
person
## $first_name
## [1] "Ada"
##
## $job
## [1] "Programmer"
##
## $salary
## [1] 78000
##
## $in_union
## [1] TRUE
```
This creates a list of 4 elements: `"Ada"` which is tagged with
`first_name`, `"Programmer"` which is tagged with `job`, `78000` which
is tagged with `salary`, and `TRUE` which is tagged with `in_union`.
The output lists all component names following the dollar sign `$` (more
about it below), and prints the components themselves right after the names.
* Note that you can have *vectors* as elements of a list. In fact, each
of these scalar values is really a vector (of length 1\), as indicated
by the `[1]` preceding its value!
* The use of the `=` symbol here is an example of assigning a value to a specific named argument. You can actually use this syntax for *any* function (e.g., rather than listing arguments in order, you can explicitly “assign” a value to each argument), but it is more common to just use the normal order of the arguments if there aren’t very many.
Note that if you need to, you can get a *vector* of element tags using the `names()` function:
```
person <- list(first_name = "Ada", job = "Programmer", salary = 78000, in_union = TRUE)
names(person) # [1] "first_name" "job" "salary" "in_union"
```
This is useful for understanding the structure of variables that may have come from other data sources.
It is possible to create a list without tagging the elements, and assign
names later if you wish:
```
person_alt <- list("Ada", 78000, TRUE)
person_alt
## [[1]]
## [1] "Ada"
##
## [[2]]
## [1] 78000
##
## [[3]]
## [1] TRUE
names(person_alt) <- c("name", "income", "membership")
person_alt
## $name
## [1] "Ada"
##
## $income
## [1] 78000
##
## $membership
## [1] TRUE
```
Note that before we assign names, the name tags are missing; instead of
names we see the position of each component in double brackets, like
`[[1]]` (more about this [below](lists.html#lists-indexing-by-position)).
Making name\-less lists and assigning names later is usually a more error\-prone and clumsier way to build lists
by hand, but when you create lists automatically in your code, it may be the only option.
Finally, empty lists of given length can also be created using the
general `vector()` function. For instance, `vector("list", 5)`, creates
a list of five `NULL` elements. This is a good approach if you just want an empty list to be filled in a loop later.
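A minimal sketch of that pattern (the `results` name is just illustrative):
```
results <- vector("list", 3) # three NULL elements
for (i in 1:3) {
  results[[i]] <- i^2        # fill each slot in turn
}
results
## [[1]]
## [1] 1
##
## [[2]]
## [1] 4
##
## [[3]]
## [1] 9
```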
8\.3 Accessing List Elements
----------------------------
There are four ways to access elements in lists. Three of these mirror
atomic vector indexing; the [`$`\-construct](lists.html#lists-dollar-shortcut) is unique to lists. However, there
are important differences.
### 8\.3\.1 Indexing by position
You can always access list elements by their position. It is in many
ways similar to that of atomic vectors with one major caveat:
indexing with single brackets will extract not the components but a *sublist* that
contains just those components:
```
# note: this list is not an atomic vector, even though its elements have the same type
animals <- list("Aardvark", "Baboon", "Camel")
animals[c(1,3)]
## [[1]]
## [1] "Aardvark"
##
## [[2]]
## [1] "Camel"
```
You can see that the result is a list with two components, “Aardvark”
and “Camel”, picked from positions 1 and 3 in the original list.
The fact that single brackets return a list rather than a vector is
actually a smart design choice. First, it cannot return a vector in
general—the requested components may be of different types and
simply not fit into an atomic vector. Second, single\-bracket indexing
of vectors likewise returns a *subvector*; we just tend to
overlook that a “scalar” is actually a length\-1 vector. But however
smart this design decision may be, people tend to learn it the hard
way. When confronted with weird errors, check that what you think
should be a vector is in fact a vector and not a list.
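One quick way to check, using the `animals` list from above:
```
class(animals[1])   # [1] "list" -- a sublist, not a character vector
is.list(animals[1]) # [1] TRUE
```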
The good news is that there is an easy way to extract components. A
single element, and not just a length\-one\-sublist, is extracted by
double brackets. For instance,
```
animals[[2]]
## [1] "Baboon"
```
returns a length\-1 character vector.
Unfortunately, the good news ends here. You can extract individual
elements in this way, but you cannot get a vector of individual list
components: `animals[[1:2]]` will give you *subscript out of bounds*.
As above, this is a design choice: as list components may be of
different type, you may not be able to mold these into a single vector.
There are ways to merge components into a vector, given they are of the
same type. For instance `Reduce(c, animals)` will convert the animals
into a vector of suitable type. Ditto with `as.character(animals)`.
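For instance, with the `animals` list from above:
```
Reduce(c, animals)    # [1] "Aardvark" "Baboon" "Camel"
as.character(animals) # [1] "Aardvark" "Baboon" "Camel"
```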
### 8\.3\.2 Indexing by Name
If the list is named, one can use a character vector to extract its
components, in exactly the same way as we used the numeric positions
above. For instance
```
person <- list(first_name = "Bob", last_name = "Wong", salary = 77000, in_union = TRUE)
person[c("first_name", "salary")]
## $first_name
## [1] "Bob"
##
## $salary
## [1] 77000
person[["first_name"]] # [1] "Bob"
person[["salary"]] # [1] 77000
```
As with positional indexing, single brackets return a sublist
while double brackets return the corresponding component itself.
### 8\.3\.3 Indexing by Logical Vector
As with atomic vectors, we can use logical indices with
lists too. There are a few differences, though:
* You can only extract sublists, not individual components.
`person[c(TRUE, TRUE, FALSE, FALSE)]` will give you a sublist with the
first and last name. `person[[c(TRUE, FALSE, FALSE, FALSE)]]` will
fail.
* Many operators are vectorized, but they are not “listified”. You
cannot do math like `*` or `+` with lists, so the
powerful logical indexing operations like `x[x > 0]` are in general not possible
with lists. This substantially reduces the potential use cases of
logical indexing.
For instance, we can extract all components whose names match a certain pattern from the
list:
```
planes <- list("Airbus 380"=c(seats=575, speed=0.85),
"Boeing 787"=c(seats=290, speed=0.85),
"Airbus 350"=c(seats=325, speed=0.85))
# cruise speed, Mach
planes[startsWith(names(planes), "Airbus")] # extract components, names
# of which starting with "Airbus"
## $`Airbus 380`
## seats speed
## 575.00 0.85
##
## $`Airbus 350`
## seats speed
## 325.00 0.85
```
However, certain vectorized operations, such as `>` or `==`, also work with lists
that contain single numeric values as their elements. It is hard to
come up with general rules here, so we recommend not relying on this
behaviour in code.
### 8\.3\.4 Extracting named elements with `$`
Finally, there is a very convenient `$`\-shortcut alternative for
extracting individual components.
If you printed out one of the named lists above, for instance `person`, you would see the following:
```
person <- list(name = "Ada", job = "Programmer")
print(person)
## $name
## [1] "Ada"
##
## $job
## [1] "Programmer"
```
Notice that the output lists each name tag prepended with a dollar sign
(**`$`**) symbol, and then on the following line the vector that is the
element itself. You can retrieve individual components in a similar
fashion: the **dollar notation** is one of the easiest ways of accessing
list elements. You refer to a particular element in the list by writing
the name of the list, followed by a `$`, followed by the element’s tag:
```
person$name # [1] "Ada"
person$job # [1] "Programmer"
```
Obviously, this only works for named lists. There is no dollar\-notation analogue for atomic vectors, even for named
vectors. The `$` extractor only exists for lists (and data structures
derived from lists, such as data frames).
You can almost read the dollar sign like an “apostrophe s” (possessive) in English: so `person$salary` would mean “the `person` list**’s** `salary` value”.
Dollar notation allows list elements to almost be treated as variables in their own right—for example, you specify that you’re talking about the `salary` variable in the `person` list, rather than the `salary` variable in some other list (or not in a list at all).
```
person <- list(first_name = "Ada", job = "Programmer", salary = 78000, in_union = TRUE)
# use elements as function or operation arguments
paste(person$job, person$first_name) # [1] "Programmer Ada"
# assign values to list element
person$job <- "Senior Programmer" # a promotion!
print(person$job) # [1] "Senior Programmer"
# assign value to list element from itself
person$salary <- person$salary * 1.15 # a 15% raise!
print(person$salary) # [1] 89700
```
Dollar\-notation is a drop\-in replacement for double\-bracket extraction,
provided you know the name of the component. If you do not—as is
often the case when programming—you have to rely on the double\-bracket approach.
### 8\.3\.5 Single vs. Double Brackets vs. Dollar
The list indexing may be confusing: we have single and double brackets,
indexing by position and name, and finally the dollar\-notation. Which
is the right thing to do? As is so often the case, it depends.
* **Dollar notation** is the quickest and easiest way to extract a single
named component when you know its name.
* **Double brackets** are a more verbose alternative to the dollar notation. They return a single component exactly as the dollar notation does. However, they also allow you to decide later which component to extract. (This is terribly useful in programs!) For instance,
we can decide whether we want to use someone’s first or last name:
```
person <- list(first_name = "Bob", last_name = "Wong", salary = 77000)
name_to_use <- "last_name" # choose name (i.e., based on formality)
person[[name_to_use]] # [1] "Wong"
name_to_use <- "first_name" # change name to use
person[[name_to_use]] # [1] "Bob"
```
Note: you may often hear that double brackets return a vector. This is only true if the corresponding element happens to be a vector; they always return the element itself!
* **Single brackets** are the most powerful and universal way of indexing. They work in a very similar fashion to vector indexing. The main caveat is that they *return a sub\-list*, not a vector. (But note that in the case of vectors, single\-bracket indexing returns a *sub\-vector*.) They allow indexing by position, by name, and by logical vector.
In some sense single brackets **filter** by whatever vector is inside the brackets (which may have just a single element). In R, single brackets *always* mean filtering the collection, whether that collection is an atomic vector or a list. So if you put single brackets after a collection, you get a filtered version of the same collection, containing the desired elements. The type of the collection, list or atomic vector, is not affected.
**Watch out**: for vectors, single\-bracket notation returns a vector; for lists, single\-bracket notation returns a list!
We recap this section with an example:
```
animal <- list(class='A', count=201, endangered=TRUE, species='rhinoceros')
## SINGLE brackets returns a list
animal[1]
## $class
## [1] "A"
## can use any vector as the argument to single brackets, just like with vectors
animal[c("species", "endangered")]
## $species
## [1] "rhinoceros"
##
## $endangered
## [1] TRUE
## DOUBLE brackets returns the element (here its a vector)!
animal[[1]] # [1] "A"
## Dollar notation is equivalent to the double brackets
animal$class # [1] "A"
```
Finally, all these methods can also be used for assignment. Just put any of these constructs on the left side of the assignment operator `<-`.
8\.4 Modifying Lists
--------------------
As in the case of atomic vectors, you can assign new values to existing elements. However, lists also provide dedicated syntax to *remove* elements. (Remember, you can always “unselect” an element of a vector, including a list, by using a negative positional index.)
You can add elements to a list simply by assigning a value to a tag (or index) in the list that doesn’t yet exist:
```
person <- list(first_name = "Ada", job = "Programmer", salary = 78000, in_union = TRUE)
# has no `age` element
person$age # NULL
# assign a value to the `age` tag to add it
person$age <- 40
person$age # [1] 40
# assign using index
person[[10]] <- "Tenth field"
# elements 6-9 will be NULL
```
This parallels atomic vectors fairly closely.
You can remove elements by assigning the special value `NULL` to their tag or index:
```
a_list <- list('A', 201, TRUE)
a_list[[2]] <- NULL # remove element #2
print(a_list)
# [[1]]
# [1] "A"
#
# [[2]]
# [1] TRUE
```
There is no direct analogue to this for atomic vectors.
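For comparison, the negative\-index approach mentioned earlier works for both vectors and lists, but it returns a copy rather than modifying the object in place (a small sketch):
```
a_list <- list('A', 201, TRUE)
a_list[-2] # a sublist without element #2; `a_list` itself is unchanged
```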
8\.5 The `lapply()` Function
----------------------------
A large number of common R functions (e.g., `paste()`, `round()`, etc.) and most common operators (like `+`, `>`, etc.) are *vectorized*, so you can pass vectors as arguments, and the function will be applied to each item in the vector. It “just works”. For lists this usually fails. You need to put in a bit more effort if you want to apply a
function to each item in a list.
The effort involves either an explicit loop, or an implicit loop through a function called **`lapply()`** (for ***l**ist apply*). We will discuss the latter approach here.
`lapply()` takes two arguments: the first is a list (or a vector, vectors will do as well) you want to work with, and the second is the function you want to “apply” to each item in that list. For example:
```
# list, not a vector
people <- list("Sarah", "Amit", "Zhang")
# apply the `toupper()` function to each element in `people`
lapply(people, toupper)
## [[1]]
## [1] "SARAH"
##
## [[2]]
## [1] "AMIT"
##
## [[3]]
## [1] "ZHANG"
```
You can add even more arguments to `lapply()`, those will be assumed to belong to the function you are applying:
```
# apply the `paste()` function to each element in `people`,
# with an additional argument `"dances!"` for each call
lapply(people, paste, "dances!")
## [[1]]
## [1] "Sarah dances!"
##
## [[2]]
## [1] "Amit dances!"
##
## [[3]]
## [1] "Zhang dances!"
```
The last unnamed argument, `"dances!"`, is taken as the second argument to `paste()`. So behind the scenes, `lapply()` runs a loop over `paste("Sarah", "dances!")`, `paste("Amit", "dances!")`, and so on.
* Notice that the second argument to `lapply()` is just the function itself: not the name of the function as a character string (it’s not quoted in `""`). You’re also not actually *calling* that function when you write its name in `lapply()` (you don’t put the parentheses `()` after its name). See more in the [section *How to Use Functions*](functions.html#how-to-use-functions).
After the function, you can put any additional arguments you want the applied function to be called with: for example, how many digits to round to, or what value to paste to the end of a string.
Note that the `lapply()` function returns a *new* list; the original one is unmodified. This makes it a [**mapping**](https://en.wikipedia.org/wiki/Map_(parallel_pattern)) operation. It is an operation, not the same thing as the *map* data structure. In a mapping operation, the code applies the same **elemental function** to all elements of a list.
You commonly use `lapply()` with your own custom functions which define what you want to do to a single element in that list:
```
# A function that prepends "Hello" to any item
greet <- function(item) {
return(paste("Hello", item))
}
# a list of people
people <- list("Sarah", "Amit", "Zhang")
# greet each name
greetings <- lapply(people, greet)
greetings
## [[1]]
## [1] "Hello Sarah"
##
## [[2]]
## [1] "Hello Amit"
##
## [[3]]
## [1] "Hello Zhang"
```
Additionally, `lapply()` is a member of the “`*apply()`” family of functions: a set of functions that each start with a different letter and may apply to a different data structure, but otherwise all work in a similar fashion. For example, `lapply()` returns a list, while `sapply()` (**s**implified apply) simplifies the list into a vector, if possible. If you are interested in parallel programming, we recommend checking out the function `parLapply()` and its friends in the *parallel* package.
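For instance, `sapply()` would collapse the earlier example into a plain character vector:
```
people <- list("Sarah", "Amit", "Zhang")
sapply(people, toupper) # a character vector rather than a list
## [1] "SARAH" "AMIT" "ZHANG"
```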
Resources
---------
* [R Tutorial: Lists](http://www.r-tutor.com/r-introduction/list)
* [R Tutorial: Named List Members](http://www.r-tutor.com/r-introduction/list/named-list-members)
* [StackOverflow: Single vs. double brackets](http://stackoverflow.com/questions/1169456/in-r-what-is-the-difference-between-the-and-notations-for-accessing-the)
| Field Specific |
info201.github.io | https://info201.github.io/data-frames.html |
Chapter 9 Data Frames
=====================
This chapter introduces **data frame** objects, which are the primary data storage type used in R. In many ways, data frames are similar to a two\-dimensional row/column layout that you should be familiar with from spreadsheet programs like Microsoft Excel. Rather than interact with this data structure through a UI, we’ll learn how to programmatically and reproducibly perform operations on this data type. This chapter covers various ways of creating, describing, and accessing data frames, as well as how they are related to other data types in R.
9\.1 What is a Data Frame?
--------------------------
At a practical level, **Data Frames** act like *tables*, where data is organized into rows and columns. For example, consider the following table of names, weights, and heights:
A table of data (people’s weights and heights):

| name | height | weight |
| --- | --- | --- |
| Ada | 58 | 115 |
| Bob | 59 | 117 |
| Chris | 60 | 120 |
| Diya | 61 | 123 |
| Emma | 62 | 126 |
In this table, each *row* represents a **record** or **observation**: an instance of a single thing being measured (e.g., a person). Each *column* represents a **feature**: a particular property or aspect of the thing being measured (e.g., the person’s height or weight). This structure is used to organize lots of different *related* data points for easier analysis.
In R, you can use **data frames** to represent these kinds of tables. Data frames are really just **lists** (see [Lists](lists.html#lists)) in which each element is a **vector of the same length**. Each vector represents a **column, *not* a row**. The elements at corresponding indices in the vectors are considered part of the same record (row).
* This makes sense because a single row may contain data of different types—e.g., a person’s `name` (string) and `height` (number)—while vector elements must all be of the same type.
For example, you can think of the above table as a *list* of three *vectors*: `name`, `height` and `weight`. The name, height, and weight of the first person measured are represented by the first elements of the `name`, `height` and `weight` vectors respectively.
You can work with data frames as if they were lists, but data frames include additional properties as well that make them particularly well suited for handling tables of data.
### 9\.1\.1 Creating Data Frames
Typically you will *load* data sets from some external source (see [below](data-frames.html#csv-files)), rather than writing out the data by hand. However, it is important to understand that you can construct a data frame by combining multiple vectors. To accomplish this, you can use the `data.frame()` function, which accepts **vectors** as *arguments*, and creates a table with a column for each vector. For example:
```
# vector of names
name <- c("Ada", "Bob", "Chris", "Diya", "Emma")
# Vector of heights
height <- 58:62
# Vector of weights
weight <- c(115, 117, 120, 123, 126)
# Combine the vectors into a data.frame
# Note the names of the variables become the names of the columns!
my_data <- data.frame(name, height, weight, stringsAsFactors = FALSE)
```
* (The last argument to the `data.frame()` function is included because one of the vectors contains strings; it tells R to treat that vector as a *vector* not as a **factor**. This is usually what you’ll want to do. See below for details about [factors](data-frames.html#factors)).
Because data frames *are* lists (of column vectors), you can access the values from `my_data` using the same **dollar notation** and **double\-bracket notation** as lists:
```
# Using the same weights/heights as above:
my_data <- data.frame(height, weight)
# Retrieve weights (the `weight` element of the list: a vector!)
my_weights <- my_data$weight
# Retrieve heights (the whole column: a vector!)
my_heights <- my_data[["height"]]
```
### 9\.1\.2 Describing Structure of Data Frames
While you can interact with data frames as lists, they also offer a number of additional capabilities and functions. For example, here are a few ways you can *inspect* the structure of a data frame:
| Function | Description |
| --- | --- |
| `nrow(my_data_frame)` | Number of rows in the data frame |
| `ncol(my_data_frame)` | Number of columns in the data frame |
| `dim(my_data_frame)` | Dimensions (rows, columns) in the data frame |
| `colnames(my_data_frame)` | Names of the columns of the data frame |
| `rownames(my_data_frame)` | Names of the row of the data frame |
| `head(my_data_frame)` | Extracts the first few rows of the data frame (as a new data frame) |
| `tail(my_data_frame)` | Extracts the last few rows of the data frame (as a new data frame) |
| `View(my_data_frame)` | Opens the data frame in a spreadsheet\-like viewer (only in RStudio) |
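For example, using the name/height/weight data from above:
```
my_data <- data.frame(name, height, weight, stringsAsFactors = FALSE)
nrow(my_data)     # [1] 5
ncol(my_data)     # [1] 3
dim(my_data)      # [1] 5 3
colnames(my_data) # [1] "name"   "height" "weight"
```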
Note that many of these description functions can also be used to *modify* the structure of a data frame. For example, you can use the `colnames()` function to assign a new set of column names to a data frame:
```
# Using the same weights/heights as above:
my_data <- data.frame(name, height, weight)
# A vector of new column names
new_col_names <- c('first_name','how_tall','how_heavy')
# Assign that vector to be the vector of column names
colnames(my_data) <- new_col_names
```
### 9\.1\.3 Accessing Data in Data Frames
As stated above, since data frames *are* lists, it’s possible to use **dollar notation** (`my_data_frame$column_name`) or **double\-bracket notation** (`my_data_frame[['column_name']]`) to access entire columns. However, R also uses a variation of **single\-bracket notation** which allows you to access individual data elements (cells) in the table. In this syntax, you put *two* values separated by a comma (**`,`**) inside the brackets—the first for which row and the second for which column you wish you extract:
| Syntax | Description | Example |
| --- | --- | --- |
| `my_df[row_num, col_num]` | Element by row and column indices | `my_frame[2,3]` (element in the second row, third column) |
| `my_df[row_name, col_name]` | Element by row and column names | `my_frame['Ada','height']` (element in row *named* `Ada` and column *named* `height`; the `height` of `Ada`) |
| `my_df[row, col]` | Element by row and col; can mix indices and names | `my_frame[2,'height']` (second element in the `height` column) |
| `my_df[row, ]` | All elements (columns) in row index or name | `my_frame[2,]` (all columns in the second row) |
| `my_df[, col]` | All elements (rows) in a col index or name | `my_frame[,'height']` (all rows in the `height` column; equivalent to list notations) |
Take special note of the 4th option’s syntax (for retrieving rows): you still include the comma (`,`), but because you leave *which column* blank, you get all of the columns!
```
# Extract the second row
my_data[2, ] # comma
# Extract the second column AS A VECTOR
my_data[, 2] # comma
# Extract the second column AS A DATA FRAME (filtering)
my_data[2] # no comma
```
(With the comma notation, extracting more than one column produces a *sub\-data frame*, while extracting a single column produces a vector; single\-bracket filtering without the comma always returns a data frame.)
And of course, because *everything is a vector*, you’re actually specifying vectors of indices to extract. This allows you to get multiple rows or columns:
```
# Get the second through fourth rows
my_data[2:4, ]
# Get the `height` and `weight` columns
my_data[, c("height", "weight")]
# Perform filtering
my_data[my_data$height > 60, ] # rows for which `height` is greater than 60
```
9\.2 Working with CSV Data
--------------------------
So far you’ve been constructing your own data frames by “hard\-coding” the data values. But it’s much more common to load that data from somewhere else, such as a separate file on your computer or by downloading it off the internet. While R is able to ingest data from a variety of sources, this chapter will focus on reading tabular data in **comma separated value** (CSV) format, usually stored in a `.csv` file. In this format, each line of the file represents a record (*row*) of data, while each feature (*column*) of that record is separated by a comma:
```
Ada, 58, 115
Bob, 59, 117
Chris, 60, 120
Diya, 61, 123
Emma, 62, 126
```
Most spreadsheet programs like Microsoft Excel, Numbers, or Google Sheets are simply interfaces for formatting and interacting with data that is saved in this format. These programs easily import and export `.csv` files; however `.csv` files are unable to save the formatting done in those programs—the files only store the data!
You can load the data from a `.csv` file into R by using the `read.csv()` function:
```
# Read data from the file `my_file.csv` into a data frame `my_data`
my_data <- read.csv('my_file.csv', stringsAsFactors=FALSE)
```
Again, use the `stringsAsFactors` argument to make sure string data is stored as a *vector* rather than as a *factor* (see [below](data-frames.html#factors)). This function will return a data frame, just like those described above!
**Important Note**: If for whatever reason an element is missing from a data frame (which is very common with real world data!), R will fill that cell with the logical value `NA` (distinct from the string `"NA"`), meaning “**N**ot **A**vailable”. There are multiple ways to handle this in an analysis; see [this link](http://www.statmethods.net/input/missingdata.html) among others for details.
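As a small sketch of one common approach (assuming a data frame `my_data` whose `height` column has some missing values):
```
# Which `height` values are missing?
is.na(my_data$height)
# Keep only the rows where `height` is not missing
complete_heights <- my_data[!is.na(my_data$height), ]
```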
### 9\.2\.1 Working Directory
The biggest complication when loading `.csv` files is that the `read.csv()` function takes as an argument a **path** to the file. Because you want this script to work on any computer (to support collaboration, as well as things like assignment grading), you need to be sure to use a **relative path** to the file. The question is: *relative to what*?
Like the command\-line, the R interpreter (running inside R Studio) has a **current working directory** from which all file paths are relative. The trick is that ***the working directory is not the directory of the current script file!***
* This makes sense if you think about it: you can run R commands through the console without having a script, and you can have multiple script files open from separate folders that all interact with the same execution environment.
Just as you can view the current working directory when on the command line (using `pwd`), you can use an R function to view the current working directory when in R:
```
# get the absolute path to the current working directory
getwd()
```
You will often want to change the working directory to be your “project” directory (wherever your scripts and data files happen to be). It is possible to change the current working directory using the `setwd()` function. However, this function also takes an absolute path, so it doesn’t fix the problem: you would not want to include that absolute path in your script (though you could use it from the console).
One solution is to use the tilde (`~`) shortcut to specify your directory:
```
# Set working directory on Desktop
setwd("~/Desktop/project-name")
```
This enables you to work across machines, as long as the project is stored in the same location on each machine.
Another solution is to use R Studio itself to change the working directory. This is reasonable because the working directory is a property of the *current running environment*, which is what R Studio makes accessible! The easiest way to do this is to use the **`Session > Set Working Directory`** menu options: you can either set the working directory `To Source File Location` (the folder containing whichever `.R` script you are currently editing; this is usually what you want), or you can browse for a particular directory with `Choose Directory`.
Use `Session > Set Working Directory` to change the working directory through R Studio
You should do this whenever you hit a “path” problem when loading external files. If you want to do this repeatedly by calling `setwd()` from your script to an absolute path, you may want to keep it commented out (`# setwd(...)`) so it doesn’t cause problems for others who try to run your script.
9\.3 Factor Variables
---------------------
**Factors** are a way of *optimizing* variables that consist of a finite set of categories (i.e., they are **categorical (nominal) variables**).
For example, imagine that you had a vector of shirt sizes which could only take on the values `small`, `medium`, or `large`. If you were working with a large dataset (thousands of shirts!), it would end up taking up a lot of memory to store the character strings (5\+ letters per word at 1 or more bytes per letter) for each one of those variables.
A **factor** on the other hand would instead store a *number* (called a **level**) for each of these character strings: for example, `1` for `small`, `2` for `medium`, or `3` for `large` (though the order or specific numbers will vary). R will remember the relationship between the integers and their **labels** (the strings). Since each number only takes 4 bytes (rather than 1 per letter), factors allow R to keep much more information in memory.
```
# Start with a character vector of shirt sizes
shirt_sizes <- c("small", "medium", "small", "large", "medium", "large")
# Convert to a vector of factor data
shirt_sizes_factor <- as.factor(shirt_sizes)
# View the factor and its levels
print(shirt_sizes_factor)
# The length of the factor is still the length of the vector, not the number of levels
length(shirt_sizes_factor) # 6
```
When you print out the `shirt_sizes_factor` variable, R still (intelligently) prints out the **labels** that you are presumably interested in. It also indicates the **levels**, which are the *only* possible values that elements can take on.
It is worth re\-stating: **factors are not vectors**. This means that most all the operations and functions you want to use on vectors *will not work*:
```
# Create a factor of numbers (factors need not be strings)
num_factors <- as.factor(c(10,10,20,20,30,30,40,40))
# Print the factor to see its levels
print(num_factors)
# Multiply the numbers by 2
num_factors * 2 # Error: * not meaningful
# returns vector of NA instead
# Changing entry to a level is fine
num_factors[1] <- 40
# Change entry to a value that ISN'T a level fails
num_factors[1] <- 50 # Error: invalid factor level
# num_factors[1] is now NA
```
If you create a data frame with a string vector as a column (as what happens with `read.csv()`), it will automatically be treated as a factor *unless you explicitly tell it not to*:
```
# Vector of shirt sizes
shirt_size <- c("small", "medium", "small", "large", "medium", "large")
# Vector of costs (in dollars)
cost <- c(15.5, 17, 17, 14, 12, 23)
# Data frame of inventory (with factors, since didn't say otherwise)
shirts_factor <- data.frame(shirt_size, cost)
# The shirt_size column is a factor
is.factor(shirts_factor$shirt_size) # TRUE
# Can treat this as a vector; but better to fix how the data is loaded
as.vector(shirts_factor$shirt_size) # a vector
# Data frame of orders (without factoring)
shirts <- data.frame(shirt_size, cost, stringsAsFactors = FALSE)
# The shirt_size column is NOT a factor
is.factor(shirts$shirt_size) # FALSE
```
This is not to say that factors can’t be useful (beyond just saving memory)! They offer easy ways to group and process data using specialized functions:
```
shirt_size <- c("small", "medium", "small", "large", "medium", "large")
cost <- c(15.5, 17, 17, 14, 12, 23)
# Data frame of inventory (with factors)
shirts_factor <- data.frame(shirt_size, cost)
# Produce a list of data frames, one for each factor level
# first argument is the data frame to split, second is the factor to split by
shirt_size_frames <- split(shirts_factor, shirts_factor$shirt_size)
# Apply a function (mean) to each factor level
# first argument is the vector to apply the function to,
# second argument is the factor to split by
# third argument is the name of the function
tapply(shirts_factor$cost, shirts_factor$shirt_size, mean)
```
However, in general this course is more interested in working with data as vectors, thus you should always use `stringsAsFactors=FALSE` when creating data frames or loading `.csv` files that include strings.
Resources
---------
* [R Tutorial: Data Frames](http://www.r-tutor.com/r-introduction/data-frame)
* [R Tutorial: Data Frame Indexing](http://www.r-tutor.com/r-introduction/data-frame/data-frame-row-slice)
* [Quick\-R: Missing Values](http://www.statmethods.net/input/missingdata.html)
* [Factor Variables (UCLA)](http://www.ats.ucla.edu/stat/r/modules/factor_variables.htm)
9\.1 What is a Data Frame?
--------------------------
At a practical level, **Data Frames** act like *tables*, where data is organized into rows and columns. For example, consider the following table of names, weights, and heights:
A table of data (people’s weights and heights).
In this table, each *row* represents a **record** or **observation**: an instance of a single thing being measured (e.g., a person). Each *column* represents a **feature**: a particular property or aspect of the thing being measured (e.g., the person’s height or weight). This structure is used to organize lots of different *related* data points for easier analysis.
In R, you can use **data frames** to represent these kinds of tables. Data frames are really just **lists** (see [Lists](lists.html#lists)) in which each element is a **vector of the same length**. Each vector represents a **column, *not* a row**. The elements at corresponding indices in the vectors are considered part of the same record (row).
* This makes sense because a single row may contain different types of data (e.g., a person’s `name` is a string while their `height` is a number), whereas the elements of a single vector must all be of the same type.
For example, you can think of the above table as a *list* of three *vectors*: `name`, `height` and `weight`. The name, height, and weight of the first person measured are represented by the first elements of the `name`, `height` and `weight` vectors respectively.
You can work with data frames as if they were lists, but data frames include additional properties as well that make them particularly well suited for handling tables of data.
### 9\.1\.1 Creating Data Frames
Typically you will *load* data sets from some external source (see [below](data-frames.html#csv-files)), rather than writing out the data by hand. However, it is important to understand that you can construct a data frame by combining multiple vectors. To accomplish this, you can use the `data.frame()` function, which accepts **vectors** as *arguments*, and creates a table with a column for each vector. For example:
```
# vector of names
name <- c("Ada", "Bob", "Chris", "Diya", "Emma")
# Vector of heights
height <- 58:62
# Vector of weights
weight <- c(115, 117, 120, 123, 126)
# Combine the vectors into a data.frame
# Note the names of the variables become the names of the columns!
my_data <- data.frame(name, height, weight, stringsAsFactors = FALSE)
```
* (The last argument to the `data.frame()` function is included because one of the vectors contains strings; it tells R to treat that vector as a *vector* not as a **factor**. This is usually what you’ll want to do. See below for details about [factors](data-frames.html#factors)).
Because data frame elements are lists, you can access the values from `my_data` using the same **dollar notation** and **double\-bracket notation** as lists:
```
# Using the same weights/heights as above:
my_data <- data.frame(height, weight)
# Retrieve weights (the `weight` element of the list: a vector!)
my_weights <- my_data$weight
# Retrieve heights (the whole column: a vector!)
my_heights <- my_data[["height"]]
```
### 9\.1\.2 Describing Structure of Data Frames
While you can interact with data frames as lists, they also offer a number of additional capabilities and functions. For example, here are a few ways you can *inspect* the structure of a data frame:
| Function | Description |
| --- | --- |
| `nrow(my_data_frame)` | Number of rows in the data frame |
| `ncol(my_data_frame)` | Number of columns in the data frame |
| `dim(my_data_frame)` | Dimensions (rows, columns) in the data frame |
| `colnames(my_data_frame)` | Names of the columns of the data frame |
| `rownames(my_data_frame)` | Names of the row of the data frame |
| `head(my_data_frame)` | Extracts the first few rows of the data frame (as a new data frame) |
| `tail(my_data_frame)` | Extracts the last few rows of the data frame (as a new data frame) |
| `View(my_data_frame)` | Opens the data frame in a spreadsheet\-like viewer (only in RStudio) |
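For example, these functions can be applied to the name/height/weight data frame constructed earlier (a brief sketch; the comments show the expected results for that particular data):
```
# Re-create the data frame from the vectors above and inspect its structure
my_data <- data.frame(name, height, weight, stringsAsFactors = FALSE)
nrow(my_data) # 5 (one row per person)
ncol(my_data) # 3 (one column per vector)
dim(my_data) # 5 3
colnames(my_data) # "name" "height" "weight"
head(my_data, 2) # the first two rows, as a smaller data frame
```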
Note that many of these description functions can also be used to *modify* the structure of a data frame. For example, you can use the `colnames()` function to assign a new set of column names to a data frame:
```
# Using the same weights/heights as above:
my_data <- data.frame(name, height, weight)
# A vector of new column names
new_col_names <- c('first_name','how_tall','how_heavy')
# Assign that vector to be the vector of column names
colnames(my_data) <- new_col_names
```
### 9\.1\.3 Accessing Data in Data Frames
As stated above, since data frames *are* lists, it’s possible to use **dollar notation** (`my_data_frame$column_name`) or **double\-bracket notation** (`my_data_frame[['column_name']]`) to access entire columns. However, R also uses a variation of **single\-bracket notation** which allows you to access individual data elements (cells) in the table. In this syntax, you put *two* values separated by a comma (**`,`**) inside the brackets—the first for which row and the second for which column you wish to extract:
| Syntax | Description | Example |
| --- | --- | --- |
| `my_df[row_num, col_num]` | Element by row and column indices | `my_frame[2,3]` (element in the second row, third column) |
| `my_df[row_name, col_name]` | Element by row and column names | `my_frame['Ada','height']` (element in row *named* `Ada` and column *named* `height`; the `height` of `Ada`) |
| `my_df[row, col]` | Element by row and col; can mix indices and names | `my_frame[2,'height']` (second element in the `height` column) |
| `my_df[row, ]` | All elements (columns) in row index or name | `my_frame[2,]` (all columns in the second row) |
| `my_df[, col]` | All elements (rows) in a col index or name | `my_frame[,'height']` (all rows in the `height` column; equivalent to list notations) |
Take special note of the 4th option’s syntax (for retrieving rows): you still include the comma (`,`), but because you leave *which column* blank, you get all of the columns!
```
# Extract the second row
my_data[2, ] # comma
# Extract the second column AS A VECTOR
my_data[, 2] # comma
# Extract the second column AS A DATA FRAME (filtering)
my_data[2] # no comma
```
(Extracting from more than one column will produce a *sub\-data frame*; extracting from just one column will produce a vector).
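One way to see this distinction (a quick sketch; it assumes `my_data` still has its original `height` and `weight` column names, as in the examples above) is to check the class of each result:
```
# What kind of object does each extraction produce?
class(my_data[, c("height", "weight")]) # "data.frame" (multiple columns)
class(my_data[, "height"]) # "integer" (one column plus the comma: a plain vector)
class(my_data["height"]) # "data.frame" (no comma: a one-column data frame)
```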
And of course, because *everything is a vector*, you’re actually specifying vectors of indices to extract. This allows you to get multiple rows or columns:
```
# Get the second through fourth rows
my_data[2:4, ]
# Get the `height` and `weight` columns
my_data[, c("height", "weight")]
# Perform filtering
my_data[my_data$height > 60, ] # rows for which `height` is greater than 60
```
9\.2 Working with CSV Data
--------------------------
So far you’ve been constructing your own data frames by “hard\-coding” the data values. But it’s much more common to load that data from somewhere else, such as a separate file on your computer or by downloading it off the internet. While R is able to ingest data from a variety of sources, this chapter will focus on reading tabular data in **comma separated value** (CSV) format, usually stored in a `.csv` file. In this format, each line of the file represents a record (*row*) of data, while each feature (*column*) of that record is separated by a comma:
```
Ada, 58, 115
Bob, 59, 117
Chris, 60, 120
Diya, 61, 123
Emma, 62, 126
```
Most spreadsheet programs like Microsoft Excel, Numbers, or Google Sheets are simply interfaces for formatting and interacting with data that is saved in this format. These programs easily import and export `.csv` files; however `.csv` files are unable to save the formatting done in those programs—the files only store the data!
You can load the data from a `.csv` file into R by using the `read.csv()` function:
```
# Read data from the file `my_file.csv` into a data frame `my_data`
my_data <- read.csv('my_file.csv', stringsAsFactors=FALSE)
```
Again, use the `stringsAsFactors` argument to make sure string data is stored as a *vector* rather than as a *factor* (see [below](data-frames.html#factors)). This function will return a data frame, just like those described above!
**Important Note**: If for whatever reason an element is missing from a data frame (which is very common with real world data!), R will fill that cell with the logical value `NA` (distinct from the string `"NA"`), meaning “**N**ot **A**vailable”. There are multiple ways to handle this in an analysis; see [this link](http://www.statmethods.net/input/missingdata.html) among others for details.
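As a brief illustration (a sketch only; it assumes a data frame `my_data` loaded with `read.csv()` that has a numeric `height` column with some missing entries), you can detect and work around `NA` values like this:
```
# Are any heights missing?
any(is.na(my_data$height))
# Many functions let you explicitly ignore NA values
mean(my_data$height, na.rm = TRUE)
# Keep only the rows that have a recorded height
complete_heights <- my_data[!is.na(my_data$height), ]
```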
### 9\.2\.1 Working Directory
The biggest complication when loading `.csv` files is that the `read.csv()` function takes as an argument a **path** to the file. Because you want this script to work on any computer (to support collaboration, as well as things like assignment grading), you need to be sure to use a **relative path** to the file. The question is: *relative to what*?
Like the command\-line, the R interpreter (running inside R Studio) has a **current working directory** from which all file paths are relative. The trick is that ***the working directory is not the directory of the current script file!***
* This makes sense if you think about it: you can run R commands through the console without having a script, and you can have open multiple script files from separate folders that are all interacting with the same execution environment.
Just as you can view the current working directory when on the command line (using `pwd`), you can use an R function to view the current working directory when in R:
```
# get the absolute path to the current working directory
getwd()
```
You will often want to change the working directory to be your “project” directory (wherever your scripts and data files happen to be). It is possible to change the current working directory using the `setwd()` function. However, this function also takes an absolute path, so it doesn’t fix the problem: you would not want to include that absolute path in your script (though you could use it from the console).
One solution is to use the tilde (`~`) shortcut to specify your directory:
```
# Set working directory on Desktop
setwd("~/Desktop/project-name")
```
This enables you to work across machines, as long as the project is stored in the same location on each machine.
Another solution is to use R Studio itself to change the working directory. This is reasonable because the working directory is a property of the *current running environment*, which is what R Studio makes accessible! The easiest way to do this is to use the **`Session > Set Working Directory`** menu options: you can either set the working directory `To Source File Location` (the folder containing whichever `.R` script you are currently editing; this is usually what you want), or you can browse for a particular directory with `Choose Directory`.
Use `Session > Set Working Directory` to change the working directory through R Studio
You should do this whenever you hit a “path” problem when loading external files. If you want to do this repeatedly by calling `setwd()` from your script to an absolute path, you may want to keep it commented out (`# setwd(...)`) so it doesn’t cause problems for others who try to run your script.
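Once the working directory is set to your project folder, the path you give to `read.csv()` can be written relative to that folder. For example (a sketch; the `data/` sub\-folder and file name are hypothetical):
```
# Assumes the working directory is the project folder,
# which contains a `data/` sub-folder holding the file
my_data <- read.csv("data/my_file.csv", stringsAsFactors = FALSE)
```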
9\.3 Factor Variables
---------------------
**Factors** are a way of *optimizing* variables that consist of a finite set of categories (i.e., they are **categorical (nominal) variables**).
For example, imagine that you had a vector of shirt sizes which could only take on the values `small`, `medium`, or `large`. If you were working with a large dataset (thousands of shirts!), it would end up taking up a lot of memory to store the character strings (5\+ letters per word at 1 or more bytes per letter) for each one of those variables.
A **factor** on the other hand would instead store a *number* (called a **level**) for each of these character strings: for example, `1` for `small`, `2` for `medium`, or `3` for `large` (though the order or specific numbers will vary). R will remember the relationship between the integers and their **labels** (the strings). Since each number only takes 4 bytes (rather than 1 per letter), factors allow R to keep much more information in memory.
```
# Start with a character vector of shirt sizes
shirt_sizes <- c("small", "medium", "small", "large", "medium", "large")
# Convert to a vector of factor data
shirt_sizes_factor <- as.factor(shirt_sizes)
# View the factor and its levels
print(shirt_sizes_factor)
# The length of the factor is still the length of the vector, not the number of levels
length(shirt_sizes_factor) # 6
```
When you print out the `shirt_sizes_factor` variable, R still (intelligently) prints out the **labels** that you are presumably interested in. It also indicates the **levels**, which are the *only* possible values that elements can take on.
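You can inspect this label/level relationship directly (a quick sketch; note that by default R orders the levels alphabetically):
```
# The labels for each level (alphabetical by default)
levels(shirt_sizes_factor) # "large" "medium" "small"
# The underlying level numbers actually stored for each element
as.integer(shirt_sizes_factor) # 3 2 3 1 2 1
```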
It is worth re\-stating: **factors are not vectors**. This means that almost all of the operations and functions you would want to use on vectors *will not work*:
```
# Create a factor of numbers (factors need not be strings)
num_factors <- as.factor(c(10,10,20,20,30,30,40,40))
# Print the factor to see its levels
print(num_factors)
# Multiply the numbers by 2
num_factors * 2 # Error: * not meaningful
# returns vector of NA instead
# Changing entry to a level is fine
num_factors[1] <- 40
# Change entry to a value that ISN'T a level fails
num_factors[1] <- 50 # Error: invalid factor level
# num_factors[1] is now NA
```
If you create a data frame with a string vector as a column (as happens with `read.csv()`), that column will automatically be treated as a factor *unless you explicitly tell R not to*:
```
# Vector of shirt sizes
shirt_size <- c("small", "medium", "small", "large", "medium", "large")
# Vector of costs (in dollars)
cost <- c(15.5, 17, 17, 14, 12, 23)
# Data frame of inventory (with factors, since didn't say otherwise)
shirts_factor <- data.frame(shirt_size, cost)
# The shirt_size column is a factor
is.factor(shirts_factor$shirt_size) # TRUE
# Can treat this as a vector; but better to fix how the data is loaded
as.vector(shirts_factor$shirt_size) # a vector
# Data frame of orders (without factoring)
shirts <- data.frame(shirt_size, cost, stringsAsFactors = FALSE)
# The shirt_size column is NOT a factor
is.factor(shirts$shirt_size) # FALSE
```
This is not to say that factors can’t be useful (beyond just saving memory)! They offer easy ways to group and process data using specialized functions:
```
shirt_size <- c("small", "medium", "small", "large", "medium", "large")
cost <- c(15.5, 17, 17, 14, 12, 23)
# Data frame of inventory (with factors)
shirts_factor <- data.frame(shirt_size, cost)
# Produce a list of data frames, one for each factor level
# first argument is the data frame to split, second is the factor to split by
shirt_size_frames <- split(shirts_factor, shirts_factor$shirt_size)
# Apply a function (mean) to each factor level
# first argument is the vector to apply the function to,
# second argument is the factor to split by
# third argument is the name of the function
tapply(shirts_factor$cost, shirts_factor$shirt_size, mean)
```
However, in general this course is more interested in working with data as vectors, so you should always use `stringsAsFactors=FALSE` when creating data frames or loading `.csv` files that include strings.
Resources
---------
* [R Tutorial: Data Frames](http://www.r-tutor.com/r-introduction/data-frame)
* [R Tutorial: Data Frame Indexing](http://www.r-tutor.com/r-introduction/data-frame/data-frame-row-slice)
* [Quick\-R: Missing Values](http://www.statmethods.net/input/missingdata.html)
* [Factor Variables (UCLA)](http://www.ats.ucla.edu/stat/r/modules/factor_variables.htm)
| Field Specific |
info201.github.io | https://info201.github.io/dplyr.html |
Chapter 10 The `dplyr` Library
==============================
The **`dplyr`** (“dee\-ply\-er”) package is the preeminent tool for data wrangling in R (and perhaps, in data science more generally). It provides programmers with an intuitive vocabulary for executing data management and analysis tasks. Learning and utilizing this package will make your data preparation and management process faster and easier to understand. This chapter introduces the philosophy behind the library and an overview of how to use the library to work with dataframes using its expressive and efficient syntax.
10\.1 A Grammar of Data Manipulation
------------------------------------
[Hadley Wickham](http://hadley.nz/), the creator of the [`dplyr`](https://github.com/hadley/dplyr) package, fittingly refers to it as a ***Grammar of Data Manipulation***. This is because the package provides a set of **verbs** (functions) to describe and perform common data preparation tasks. One of the core challenges in programming is mapping from questions about a dataset to specific programming operations. The presence of a data manipulation grammar makes this process smoother, as it enables you to use the same vocabulary to both *ask* questions and *write* your program. Specifically, the `dplyr` grammar lets you easily talk about and perform tasks such as:
* **select** specific features (columns) of interest from the data set
* **filter** out irrelevant data and only keep observations (rows) of interest
* **mutate** a data set by adding more features (columns)
* **arrange** the observations (rows) in a particular order
* **summarize** the data in terms of aspects such as the mean, median, or maximum
* **join** multiple data sets together into a single data frame
You can use these words when describing the *algorithm* or process for interrogating data, and then use `dplyr` to write code that will closely follow your “plain language” description because it uses functions and procedures that share the same language. Indeed, many real\-world questions about a dataset come down to isolating specific rows/columns of the data set as the “elements of interest”, and then performing a simple comparison or computation (mean, count, max, etc.). While it is possible to perform this computation with basic R functions, the `dplyr` library makes it much easier to write and read such code.
10\.2 Using `dplyr` Functions
-----------------------------
The `dplyr` package provides functions that mirror the above verbs. Using this package’s functions will allow you to quickly and effectively write code to ask questions of your data sets.
Since `dplyr` is an external package, you will need to install it (once per machine) and load it to make the functions available:
```
install.packages("dplyr") # once per machine
library("dplyr")
```
After loading the library, you can call any of the functions just as if they were the built\-in functions you’ve come to know and love.
For each `dplyr` function discussed here, the **first argument** to the function is a data frame to manipulate, with the rest of the arguments providing more details about the manipulation.
***IMPORTANT NOTE:*** inside the function argument list (inside the parentheses), we refer to data frame columns **without quotation marks**—that is, we just give the column names as *variable names*, rather than as *character strings*. This is referred to as [non\-standard evaluation](#Non-standard%20Evaluation), and is described in more detail below; while it makes code easier to write and read, it can occasionally create challenges.
The images in this section come from the [RStudio’s STRATA NYC R\-Day workshop](http://bit.ly/rday-nyc-strata15), which was presented by [Nathan Stephens](http://conferences.oreilly.com/strata/big-data-conference-ny-2015/public/schedule/speaker/217840).
### 10\.2\.1 Select
The **`select()`** operation allows you to choose and extract **columns** of interest from your data frame.
```
# Select `storm` and `pressure` columns from `storms` data frame
storm_info <- select(storms, storm, pressure)
```
Diagram of the `select()` function (by Nathan Stephens).
The `select()` function takes in the data frame to select from, followed by the names of the columns you wish to select (quotation marks are optional!)
This function is equivalent to simply extracting the columns:
```
# Extract columns by name
storm_info <- storms[, c("storm", "pressure")] # Note the comma!
```
But easier to read and write!
### 10\.2\.2 Filter
The **`filter()`** operation allows you to choose and extract **rows** of interest from your data frame (contrasted with `select()` which extracts *columns*).
```
# Select rows whose `wind` column is greater than or equal to 50
some_storms <- filter(storms, wind >= 50)
```
Diagram of the `filter()` function (by Nathan Stephens).
The `filter()` function takes in the data frame to filter, followed by a comma\-separated list of conditions that each returned *row* must satisfy. Note again that columns are provided without quotation marks!
* R will extract the rows that match **all** conditions. Thus you are specifying that you want to filter down a data frame to contain only the rows that meet Condition 1 **and** Condition 2\.
This function is equivalent to simply extracting the rows:
```
# Extract rows by condition
some_storms <- storms[storms$wind >= 50, ] # Note the comma!
```
As the number of conditions increases, it is **far easier** to read and write `filter()` functions, rather than squeeze your conditions into brackets.
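For example, filtering on two conditions at once stays readable (a sketch; the pressure threshold is arbitrary, and it assumes the same `wind` and `pressure` columns used above):
```
# Rows with strong wind AND low pressure (both conditions must hold)
strong_low <- filter(storms, wind >= 50, pressure < 1000)
# Equivalent (but harder to read) base R version:
# strong_low <- storms[storms$wind >= 50 & storms$pressure < 1000, ]
```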
### 10\.2\.3 Mutate
The **`mutate()`** operation allows you to create additional **columns** for your data frame.
```
# Add `ratio` column that is ratio between pressure and wind
storms <- mutate(storms, ratio = pressure/wind) # Replace existing `storms` frame with mutated one!
```
Diagram of the `mutate()` function (by Nathan Stephens).
The `mutate()` function takes in the data frame to mutate, followed by a comma\-separated list of columns to create using the same **`name = vector`** syntax you used when creating **lists** or **data frames** from scratch. As always, the names of the columns in the data frame are used without quotation marks.
* Despite the name, the `mutate()` function doesn’t actually change the data frame; instead it returns a *new* data frame that has the extra columns added. You will often want to replace the old data frame variable with this new value.
In cases where you are creating multiple columns (and therefore writing really long code instructions), you should break the single statement into multiple lines for readability. Because you haven’t closed the parentheses on the function arguments, R will not treat each line as a separate statement.
```
# Generic mutate command
more_columns <- mutate(
my_data_frame,
new_column_1 = old_column * 2,
new_column_2 = old_column * 3,
new_column_3 = old_column * 4
)
```
### 10\.2\.4 Arrange
The **`arrange()`** operation allows you to **sort the rows** of your data frame by some feature (column value).
```
# Arrange storms by INCREASING order of the `wind` column
sorted_storms <- arrange(storms, wind)
```
Diagram of the `arrange()` function (by Nathan Stephens).
By default, the `arrange()` function will sort rows in **increasing** order. To sort in **reverse** (decreasing) order, place a minus sign (**`-`**) in front of the column name (e.g., `-wind`). You can also use the `desc()` helper function (e.g., `desc(wind)`); a short example follows the notes below.
* You can pass multiple arguments into the `arrange()` function in order to sort first by `argument_1`, then by `argument_2`, and so on.
* Again, this doesn’t actually modify the argument data frame—instead returning a new data frame you’ll need to store.
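For example (a sketch using the same `storms` data frame, with the `wind` and `pressure` columns assumed as above):
```
# Sort by DECREASING wind, breaking ties by increasing pressure
sorted_storms <- arrange(storms, desc(wind), pressure)
```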
### 10\.2\.5 Summarize
The **`summarize()`** function (equivalently `summarise()` for those using the British spelling) will generate a *new* data frame that contains a “summary” of a **column**, computing a single value from the multiple elements in that column.
```
# Compute the median value of the `amount` column
summary <- summarize(pollution, median = median(amount))
```
Diagram of the `summarize()` function (by Nathan Stephens).
The `summarize()` function takes in the data frame to mutate, followed by the values that will be included in the resulting summary table. You can use multiple arguments to include multiple summaries in the same statement:
```
# Compute statistics for the `amount` column
summaries <- summarize(
pollution,
median = median(amount), # median value
mean = mean(amount), # "average" value
sum = sum(amount), # total value
count = n() # number of values (neat trick!)
)
```
Note that the `summarize()` function is particularly useful for grouped operations (see [below](dplyr.html#grouped-operations)), as you can produce summaries of different groups of data.
### 10\.2\.6 Distinct
The **`distinct()`** operation allows you to extract distinct values (rows) from your data frame—that is, you’ll get one row for each different value in the dataframe (or set of selected **columns**). This is a useful tool to confirm that you don’t have **duplicate observations**, which often occurs in messy datasets.
For example (no diagram available):
```
# Create a quick data frame
x <- c(1, 1, 2, 2, 3, 3, 4, 4) # duplicate x values
y <- 1:8 # unique y values
my_df <- data.frame(x, y)
# Select distinct rows, judging by the `x` column
distinct_rows <- distinct(my_df, x)
# x
# 1 1
# 2 2
# 3 3
# 4 4
# Select distinct rows, judging by the `x` and `y` columns
distinct_rows <- distinct(my_df, x, y) # returns whole table, since no duplicate rows
```
While this is a simple way to get a unique set of rows, **be careful** not to lazily remove rows of your data which may be important.
10\.3 Multiple Operations
-------------------------
You’ve likely encountered a number of instances in which you want to take the results from one function and pass them into another function. Your approach thus far has often been to create *temporary variables* for use in your analysis. For example, if you’re using the `mtcars` dataset, you may want to ask a simple question like,
> Which 4\-cylinder car gets the best miles per gallon?
This simple question actually requires a few steps:
1. *Filter* down the dataset to only 4\-cylinder cars
2. Of the 4\-cylinder cars, *filter* down to the one with the highest mpg
3. *Select* the car name of the car
You could then implement each step as follows:
```
# Preparation: add a column that is the car name
mtcars_named <- mutate(mtcars, car_name = row.names(mtcars))
# 1. Filter down to only four cylinder cars
four_cyl <- filter(mtcars_named, cyl == 4)
# 2. Filter down to the one with the highest mpg
best_four_cyl <- filter(four_cyl, mpg == max(mpg))
# 3. Select the car name of the car
best_car_name <- select(best_four_cyl, car_name)
```
While this works fine, it clutters the work environment with variables you won’t need to use again, and which can potentially step on one another’s toes. It can help with readability (the result of each step is explicit), but those extra variables make it harder to modify and change the algorithm later (you have to change them in two places).
An alternative to saving each step as a distinct, named variable would be to utilize **anonymous variables** and write the desired statements **nested** within other functions. For example, you could write the algorithm above as follows:
```
# Preparation: add a column that is the car name
mtcars_named <- mutate(mtcars, car_name = row.names(mtcars))
# Write a nested operation to return the best car name
best_car_name <- select( # 3. Select car name of the car
filter( # 2. Filter down to the one with the highest mpg
filter( # 1. Filter down to only four cylinder cars
mtcars_named, # arguments for the Step 1 filter
cyl == 4
),
mpg == max(mpg) # other arguments for the Step 2 filter
),
car_name # other arguments for the Step 3 select
)
```
This version uses *anonymous variables*—result values which are not assigned to names (so are anonymous), but instead are immediately used as the arguments to another function. You’ve used these frequently with the `print()` function and with filters (those vectors of `TRUE` and `FALSE` values)—and even the `max(mpg)` in the Step 2 filter is an anonymous variable!
This *nested* version produces the same results as the *temporary variable* version without creating the extra variables, but even with only 3 steps it can get quite complicated to read—in large part because you have to think about it “inside out”, with the expressions in the middle evaluating first. This will quickly become undecipherable for more involved operations.
### 10\.3\.1 The Pipe Operator
Luckily, `dplyr` provides a cleaner and more effective way of achieving the same task (that is, using the result of one function as an argument to the next). The **pipe operator** (**`%>%`**) indicates that the result from the first function operand should be passed in as **the first argument** to the next function operand!
As a simple example:
```
# nested version: evaluate c(), then max(), then print()
print(max(c(2, 0, 1)))
# pipe version
c(2, 0, 1) %>% # do the first function
max() %>% # its result becomes the _first_ argument to max()
print() # and that result becomes the _first_ argument to print()
```
Or as another version of the above data wrangling:
```
# Preparation: add a column that is the car name
mtcars_named <- mutate(mtcars, car_name = row.names(mtcars))
best_car_name <- filter(mtcars_named, cyl == 4) %>% # Step 1
filter(mpg == max(mpg)) %>% # Step 2
select(car_name) # Step 3
```
* Yes, the `%>%` operator is awkward to type and takes some getting used to (especially compared to the command\-line’s use of `|` to pipe). However, you can ease the typing by using the [RStudio keyboard shortcut](https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts) `cmd + shift + m`.
The pipe operator is part of the `dplyr` package (it is only available if you load that package), but it will work with *any* function, not just `dplyr` ones! This syntax, while slightly odd, can completely change and simplify the way you write code to ask questions about your data!
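As a small illustration (a sketch; it chains two base R functions with no `dplyr` verbs involved):
```
# The pipe works with base R functions too, not just dplyr verbs
mtcars %>% # start with the built-in mtcars data frame
nrow() %>% # number of rows (32)
sqrt() # square root of that count
```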
10\.4 Grouped Operations
------------------------
`dplyr` functions are powerful, but they are truly awesome when you can apply them to **groups of rows** within a data set. For example, the above use of `summarize()` isn’t particularly useful since it just gives a single summary for a given column (which you could have done anyway). However, a **grouped** operation would allow you to compute the same summary measure (`mean`, `median`, `sum`, etc.) automatically for multiple groups of rows, enabling you to ask more nuanced questions about your data set.
The **`group_by()`** operation allows you to break a data frame down into *groups* of rows, which can then have the other verbs (e.g., `summarize`, `filter`, etc.) applied to each one.
```
# Get summary statistics by city
city_summary <- group_by(pollution, city) %>%
summarize( # first argument (the data frame) is received from the pipe
mean = mean(amount),
sum = sum(amount),
n = n()
)
```
Diagram of the `group_by()` function (by Nathan Stephens).
As another example, if you were using the `mtcars` dataset, you may want to answer this question:
> What are the differences in mean miles per gallon for cars with different numbers of gears (3, 4, or 5\)?
This simple question requires computing the mean for different subsets of the data. Rather than explicitly breaking your data into different groups (a.k.a. *bins* or *chunks*) and running the same operations on each, you can use the `group_by()` function to accomplish this in a single command:
```
# Group cars by gear number, then compute the mean mpg for each group
gear_summary <- group_by(mtcars, gear) %>% # group by gear
summarize(mean = mean(mpg)) # calculate mean
# Computing the differences between the group means is done elsewhere (or by hand!)
```
Thus grouping can allow you to quickly and easily compare different subsets of your data!
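Grouping also combines with the other verbs. For example (a sketch; it keeps the most efficient car(s) within each gear group):
```
# For each gear group, keep only the row(s) with the highest mpg
best_per_gear <- group_by(mtcars, gear) %>%
filter(mpg == max(mpg)) # max(mpg) is computed separately within each group
```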
10\.5 Joins
-----------
When working with real\-world data, you’ll often find that the data is stored across *multiple* files or data frames. This can be done for a number of reasons. For one, it can help to reduce memory usage (in the same manner as **factors**). For example, if you had a data frame containing information on students enrolled in university courses, you might store information about each course (the instructor, meeting time, and classroom) in a separate data frame rather than duplicating that information for every student that takes the same course. You also may simply want to keep your information organized: e.g., have student information in one file, and course information in another.
* This separation and organization of data is a core concern in the design of [relational databases](https://en.wikipedia.org/wiki/Relational_database), a common topic of study within Information Schools.
But at some point, you’ll want to access information from both data sets (e.g., you need to figure out a student’s schedule), and thus need a way to combine the data frames. This process is called a **join** (because you are “joining” the data frames together). When you perform a join, you identify **columns** which are present in both tables. Those column values are then used as **identifiers** to determine which rows in each table correspond to one another, and thus will be combined into a row in the resulting joined table.
The **`left_join()`** operation is one example of a join. This operation looks for matching columns between the two data frames, and then returns a new data frame that is the first (“left”) operand with extra columns from the second operand added on.
```
# Combine (join) songs and artists data frames
left_join(songs, artists)
```
Diagram of the `left_join()` function (by Nathan Stephens).
To understand how this works, consider a specific example where you have a table of student\_ids and the students’ contact information. You also have a separate table of student\_ids and the students’ majors (your institution very well may store this information in separate tables for privacy or organizational reasons).
```
# Table of contact information
student_contact <- data.frame(
student_id = c(1, 2, 3, 4), # id numbers
email = c("id1@school.edu", "id2@school.edu", "id3@school.edu", "id4@school.edu")
)
# Table of information about majors
student_majors <- data.frame(
student_id = c(1, 2, 3), # id numbers
major = c("sociology", "math", "biology")
)
```
Notice that both tables have a `student_id` column, allowing you to “match” the rows from the `student_contact` table to the `student_majors` table and merge them together:
```
# Join tables by the student_id column
merged_student_info <- left_join(student_contact, student_majors)
# student_id email major
# 1 1 id1@school.edu sociology
# 2 2 id2@school.edu math
# 3 3 id3@school.edu biology
# 4 4 id4@school.edu <NA>
```
When you perform this **left join**, R goes through each row in the table on the “left” (the first argument), looking at the shared column(s) (`student_id`). For each row, it looks for a corresponding value in `student_majors$student_id`, and if it finds one then it adds any data from columns that are in `student_majors` but *not* in `student_contact` (e.g., `major`) to new columns in the resulting table, with values from whatever the matching row was. Thus student \#1 is given a `major` of “sociology”, student \#2 is given a `major` of “math”, and student \#4 is given a `major` of `NA` (because that student had no corresponding row in `student_majors`!)
* In short, a **left join** returns all of the rows from the *first* table, with all of the columns from *both* tables.
R will join tables by any and all shared columns. However, if the names of your columns don’t match exactly, you can also specify a `by` argument indicating which columns should be used for the matching:
```
# Use the named `by` argument to specify (a vector of) columns to match on
left_join(student_contact, student_majors, by="student_id")
```
* With the `by` argument, the column name *is* a string (in quotes) because you’re specifying a vector of column names (a single string literal is a vector of length 1\).
Notice that because of how a left join is defined, **the argument order matters!** The resulting table only has rows for elements in the *left* (first) table; any unmatched elements in the second table are lost. If you switch the order of the operands, you would only have information for students with majors:
```
# Join tables by the student_id column
merged_student_info <- left_join(student_majors, student_contact) # switched order!
# student_id major email
# 1 1 sociology id1@school.edu
# 2 2 math id2@school.edu
# 3 3 biology id3@school.edu
```
You don’t get any information for student \#4, because they didn’t have a record in the left\-hand table!
Because of this behavior, `dplyr` (and relational database systems in general) provide a number of different kinds of joins, each of which influences *which* rows are included in the final table. Note that in any case, *all* columns from *both* tables will be included, with rows taking on any values from their matches in the second table.
* **`left_join`** All rows from the first (left) data frame are returned. That is, you get all the data from the left\-hand table, with extra column values added from the right\-hand table. Left\-hand rows without a match will have `NA` in the right\-hand columns.
* **`right_join`** All rows from the second (right) data frame are returned. That is, you get all the data from the right\-hand table, with extra column values added from the left\-hand table. Right\-hand rows without a match will have `NA` in the left\-hand columns. This is the “opposite” of a `left_join`, and the equivalent of switching the operands.
* **`inner_join`** Only rows in **both** data frames are returned. That is, you get any rows that had matching observations in both tables, with the column values from both tables. There will be no additional `NA` values created by the join. Observations from the left that had no match in the right, or observations in the right that had no match in the left, will not be returned at all.
* **`full_join`** All rows from **both** data frames are returned. That is, you get a row for any observation, whether or not it matched. If it happened to match, it will have values from both tables in that row. Observations without a match will have `NA` in the columns from the other table.
The key to deciding between these is to think about what set of data you want as your set of observations (rows), and which columns you’d be okay with being `NA` if a record is missing.
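Using the same student tables from above, here is how two of the other joins behave (a sketch; the comments describe the expected shape of each result):
```
# Only students that appear in BOTH tables (students 1-3); no NA values are added
both_tables <- inner_join(student_contact, student_majors)
# Every student from EITHER table (students 1-4); unmatched cells become NA
all_students <- full_join(student_contact, student_majors)
```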
Note that these are all *mutating joins*, which add columns from one table to another. `dplyr` also provides *filtering joins* which exclude rows based on whether they have a matching observation in another table, and *set operations* which combine observations as if they were set elements. See [the documentation](https://cran.r-project.org/web/packages/dplyr/vignettes/two-table.html) for more detail on these options, but in this course we’ll be primarily focusing on the mutating joins described above.
10\.6 Non\-Standard Evaluation vs. Standard Evaluation
------------------------------------------------------
One of the features that makes `dplyr` such a clean and attractive way to write code is that inside of each function, you’ve been able to write column variable names **without quotes**. This is called **non\-standard evaluation (NSE)** (it is *not* the *standard* way that code is *evaluated*, or interpreted), and is useful primarily because of how it reduces typing (along with some other benefits when working with databases). In particular, `dplyr` will [“quote”](http://dplyr.tidyverse.org/articles/programming.html) expressions for you, converting those variables (symbols) into values that can be used to refer to column names.
Most of the time this won’t cause you any problems—you can either use NSE to refer to column names without quotes, or provide the quotes yourself. You can even use variables to store the name of a column of interest!
```
# Normal, non-standard evaluation version
mpg <- select(mtcars, mpg)
# "Standard-evaluation" version (same result)
mpg <- select(mtcars, "mpg") # with quotes! "mpg" is a normal value!
# Make the column name a variable
which_col <- "mpg"
my_column <- select(mtcars, which_col)
```
However, this NSE can sometimes trip you up when using more complex functions such as `summarize()` or `group_by()`, or when you want to create your own functions that use NSE.
```
which_col <- "mpg"
summarize(mtcars, avg = mean(which_col)) # In mean.default(which_col) :
# argument is not numeric or logical: returning NA
```
In this case, the `summarize()` function is trying to “quote” what we typed in (the `which_col` variable name, not its `mpg` value), and then hitting a problem because there is no column of that name (it can’t resolve that column name to a column index).
To fix this problem, there are two parts: first, you need to explicitly tell R that the *value* of `which_col` (`mpg`) is actually the value that needs to be automatically “quoted”—that is, that `mpg` is really a variable! Variable names in R are referred to as [**symbols**](https://cran.r-project.org/doc/manuals/r-release/R-lang.html#Symbol-objects)—a symbol refers to the variable label itself. You can explicitly change a value into a symbol by using the [`rlang::sym()`](https://www.rdocumentation.org/packages/rlang/versions/0.1.6/topics/sym) function (the `sym()` function found in the `rlang` library; the `::` indicates that the function belongs to a library).
```
which_col_sym <- rlang::sym(which_col) # convert to a symbol
print(which_col_sym) # => mpg (but not in quotes, because it's not a string!)
```
Second, you will need to tell the `summarize()` function that it should *not* quote this symbol (because you’ve already converted it into a variable)—what is called **unquoting**. In `dplyr`, you “unquote” a parameter to a method by including two exclamation points in front of it:
```
summarize(mtcars, avg = mean(!!which_col_sym)) # computes the mean of the specified column
```
There are many more details involved in this “quoting/unquoting” process, which are described in [this tutorial](http://dplyr.tidyverse.org/articles/programming.html) (though that is currently [being updated with better examples](http://rpubs.com/lionel-/programming-draft)).
### 10\.6\.1 Explicit Standard Evaluation
Alternatively, older versions of `dplyr` supplied functions that *explicitly* performed **standard evaluation (SE)**—that is, they performed no quoting and expected you to do that work yourself. While now considered deprecated, they can still be useful if you are having problems with the new quoting system. These functions have the exact same names as the normal verb functions, except they are followed by an underscore (**`_`**):
```
# Normal, non-standard evaluation version
mpg <- select(mtcars, mpg)
# Standard-evaluation version (same result)
mpg <- select_(mtcars, 'mpg') # with quotes! 'mpg' is a normal value!
# Normal, non-standard evaluation version of equations
mean_mpg <- summarize(mtcars, mean(mpg))
# Standard-evaluation version of equations (same result)
mean_mpg <- summarize_(mtcars, 'mean(mpg)')
# Which column you're interested in
which_column <- 'mpg'
# Use standard evaluation to execute function:
my_column <- arrange_(mtcars, which_column)
```
Yes, it does feel a bit off that the “normal” way of using `dplyr` is the “non\-standard” way. Just remember that using SE is the “different” approach.
The non\-standard evaluation offered by `dplyr` can make it quick and easy to work with data when you know its structure and variable names, but can be a challenge when trying to work with variables. Often in that case, you may want to instead use the standard data frame syntax (e.g., bracket notation) described in Chapter 9\.
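For example (a sketch of that base R alternative; no quoting tricks are needed because bracket notation always takes strings):
```
# Which column you're interested in, stored as a normal string
which_column <- "mpg"
# Double-bracket notation resolves the string to the column vector
mean(mtcars[[which_column]])
# Single-bracket notation can filter rows using the same variable
efficient_cars <- mtcars[mtcars[[which_column]] > 30, ]
```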
Resources
---------
* [Introduction to dplyr](https://cran.r-project.org/web/packages/dplyr/vignettes/introduction.html)
* [dplyr and pipes: the basics (blog)](http://seananderson.ca/2014/09/13/dplyr-intro.html)
* [Two\-table verbs](https://cran.r-project.org/web/packages/dplyr/vignettes/two-table.html)
* [DPLYR Join Cheatsheet (Jenny Bryan)](http://stat545.com/bit001_dplyr-cheatsheet.html)
* [Non\-standard evaluation](https://cran.r-project.org/web/packages/dplyr/vignettes/nse.html)
* [Data Manipulation with DPLYR (R\-bloggers)](https://www.r-bloggers.com/data-manipulation-with-dplyr/)
Chapter 11 Accessing Web APIs
=============================
R is able to load data from external packages or read it from locally\-saved `.csv` files, but it is also able to download data directly from web sites on the internet. This allows scripts to always work with the latest data available, performing analysis on data that may be changing rapidly (such as from social networks or other live events). Web services may make their data easily accessible to computer programs like R scripts by offering an **Application Programming Interface (API)**. A web service’s API specifies *where* and *how* particular data may be accessed, and many web services follow a particular style known as *Representational State Transfer (REST)*. This chapter will cover how to access and work with data from these *RESTful APIs*.
11\.1 What is a Web API?
------------------------
An **interface** is the point at which two different systems meet and *communicate*: exchanging information and instructions. An **Application Programming Interface (API)** thus represents a way of communicating with a computer application by writing a computer program (a set of formal instructions understandable by a machine). APIs commonly take the form of **functions** that can be called to give instructions to programs—the set of functions provided by a library like `dplyr` makes up the API for that library.
While most APIs provide an interface for utilizing *functionality*, other APIs provide an interface for accessing *data*. One of the most common sources of these data APIs is **web services**: websites that offer an interface for accessing their data.
With web services, the interface (the set of “functions” you can call to access the data) takes the form of **HTTP Requests**—a *request* for data sent following the ***H**yper**T**ext **T**ransfer **P**rotocol*. This is the same protocol (way of communicating) used by your browser to view a web page! An HTTP Request represents a message that your computer sends to a web server (another computer on the internet which “serves”, or provides, information). That server, upon receiving the request, will determine what data to include in the **response** it sends *back* to the requesting computer. With a web browser, the response data takes the form of HTML files that the browser can *render* as web pages. With data APIs, the response data will be structured data that you can convert into R structures such as lists or data frames.
In short, loading data from a Web API involves sending an **HTTP Request** to a server for a particular piece of data, and then receiving and parsing the **response** to that request.
11\.2 RESTful Requests
----------------------
There are two parts to a request sent to an API: the name of the **resource** (data) that you wish to access, and a **verb** indicating what you want to do with that resource. In many ways, the *verb* is the function you want to call on the API, and the *resource* is an argument to that function.
### 11\.2\.1 URIs
Which **resource** you want to access is specified with a **Uniform Resource Identifier (URI)**. A URI is a generalization of a URL (Uniform Resource Locator)—what you commonly think of as “web addresses”. URIs act a lot like the *address* on a postal letter sent within a large organization such as a university: you indicate the business address as well as the department and the person, and will get a different response (and different data) from Alice in Accounting than from Sally in Sales.
* Note that the URI is the **identifier** (think: variable name) for the resource, while the **resource** is the actual *data* value that you want to access.
Like postal letter addresses, URIs have a very specific format used to direct the request to the right resource.
The format (schema) of a URI.
Not all parts of the format are required—for example, you don’t need a `port`, `query`, or `fragment`. Important parts of the format include:
* `scheme` (`protocol`): the “language” that the computer will use to communicate the request to this resource. With web services this is normally `https` (**s**ecure HTTP)
* `domain`: the address of the web server to request information from
* `path`: which resource on that web server you wish to access. This may be the name of a file with an extension if you’re trying to access a particular file, but with web services it often just looks like a folder path!
* `query`: extra **parameters** (arguments) about what resource to access.
The `domain` and `path` usually specify the resource. For example, `www.domain.com/users` might be an *identifier* for a resource which is a list of users. Note that web services can also have “subresources” by adding extra pieces to the path: `www.domain.com/users/mike` might refer to the specific “mike” user in that list.
With an API, the domain and path are often viewed as being broken up into two parts:
* The **Base URI** is the domain and part of the path that is included on *all* resources. It acts as the “root” for any particular resource. For example, the [GitHub API](https://developer.github.com/v3/) has a base URI of `https://api.github.com/`.
* An **Endpoint**, or which resource on that domain you want to access. Each API will have *many* different endpoints.
For example, GitHub includes endpoints such as:
+ `/users/{user}` to get information about a specific user (the `{}` indicates a “variable”, in that you can put any username in place of the string `"{user}"`). Check out this [example](https://api.github.com/users/mkfreeman) in your browser.
+ `/orgs/{organization}/repos` to get the repositories that are on an organization page. See an [example](https://api.github.com/orgs/info201a-au17/repos).
Thus you can equivalently talk about accessing a particular **resource** and sending a request to a particular **endpoint**. The **endpoint** is appended to the end of the **Base URI**, so you could access a GitHub user by combining the **Base URI** (`https://api.github.com`) and **endpoint** (`/users/mkfreeman`) into a single string: `https://api.github.com/users/mkfreeman`. That URL will return a data structure describing that user, which you can request from your R program or simply view in your web\-browser.
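Since the full URI is just the **Base URI** with an **endpoint** appended, you can assemble it with ordinary string functions; a minimal sketch (the variable names are only for illustration):
```
base_uri <- "https://api.github.com"
endpoint <- "/users/mkfreeman"

# paste0() concatenates strings with no separator
resource_uri <- paste0(base_uri, endpoint)
print(resource_uri) # [1] "https://api.github.com/users/mkfreeman"
```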
#### 11\.2\.1\.1 Query Parameters
Often in order to access only partial sets of data from a resource (e.g., to only get some users) you also include a set of **query parameters**. These are like extra arguments that are given to the request function. Query parameters are listed after a question mark **`?`** in the URI, and are formed as key\-value pairs similar to how you named items in *lists*. The **key** (*parameter name*) is listed first, followed by an equal sign **`=`**, followed by the **value** (*parameter value*); note that you can’t include any spaces in URIs! You can include multiple query parameters by putting an ampersand **`&`** between each key\-value pair:
```
?firstParam=firstValue&secondParam=secondValue&thirdParam=thirdValue
```
Exactly what parameter names you need to include (and what values are legal to assign to that name) depends on the particular web service. Common examples include having parameters named `q` or `query` for searching, with the value being whatever term you want to search for: in [`https://www.google.com/search?q=informatics`](https://www.google.com/search?q=informatics), the **resource** at the `/search` **endpoint** takes a query parameter `q` whose value is the search term!
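As a rough sketch of how such a query string could be assembled in R (this previews the GitHub search request used later in this chapter):
```
# Two key-value pairs, attached after `?` and joined by `&` (no spaces!)
query_string <- paste0("?q=", "d3", "&sort=", "forks")
paste0("https://api.github.com/search/repositories", query_string)
# [1] "https://api.github.com/search/repositories?q=d3&sort=forks"
```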
#### 11\.2\.1\.2 Access Tokens and API Keys
Many web services require you to register with them in order to send them requests. This allows them to limit access to the data, as well as to keep track of who is asking for what data (usually so that if someone starts “spamming” the service, they can be blocked).
To facilitate this tracking, many services provide **Access Tokens** (also called **API Keys**). These are unique strings of letters and numbers that identify a particular developer (like a secret password that only works for you). Web services will require you to include your *access token* as a query parameter in the request; the exact name of the parameter varies, but it often looks like `access_token` or `api_key`. When exploring a web service, keep an eye out for whether they require such tokens.
*Access tokens* act a lot like passwords; you will want to keep them secret and not share them with others. This means that you **should not include them in your committed files**, so that the passwords don’t get pushed to GitHub and shared with the world. The best way to get around this in R is to create a separate script file in your repo (e.g., `api-keys.R`) which includes exactly one line: assigning the key to a variable:
```
# in `api-keys.R`
api_key <- "123456789abcdefg"
```
You can then include this filename in a **`.gitignore`** file in your repo; that will keep it from even possibly being committed with your code!
In order to access this variable in your “main” script, you can use the `source()` function to load and run your `api-keys.R` script. This will execute the line of code that assigns the `api_key` variable, making it available in your environment for your use:
```
# in `my-script.R`
# (make sure working directory is set)
source('api-keys.R') # load the script
print(api_key) # key is now available!
```
Anyone else who runs the script will simply need to provide an `api_key` variable to access the API using their key, keeping everyone’s account separate!
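Once the key has been loaded this way, you will usually pass it along as a query parameter when sending the request. Below is a rough sketch using the `httr` package introduced later in this chapter; the URI and the `api_key` parameter name are placeholders, since each web service documents its own.
```
library("httr")

# The key travels as an ordinary query parameter (placeholder URI and
# parameter name; check the service's documentation for the real ones)
response <- GET(
  "https://api.example.com/data",
  query = list(api_key = api_key, q = "seattle")
)
```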
Watch out for APIs that mention using [OAuth](https://en.wikipedia.org/wiki/OAuth) when explaining API keys. OAuth is a system for performing **authentication**—that is, letting someone log into a website from your application (like what a “Log in with Facebook” button does). OAuth systems require more than one access key, and these keys ***must*** be kept secret and usually require you to run a web server to utilize them correctly (which requires lots of extra setup; see [the full `httr` docs](https://cran.r-project.org/web/packages/httr/httr.pdf) for details). So for this course, we encourage you to avoid anything that needs OAuth.
### 11\.2\.2 HTTP Verbs
When you send a request to a particular resource, you need to indicate what you want to *do* with that resource. When you load web content, you are typically sending a request to retrieve information (logically, this is a `GET` request). However, there are other actions you can perform to modify the data structure on the server. This is done by specifying an **HTTP Verb** in the request. The HTTP protocol supports the following verbs:
* `GET` Return a representation of the current state of the resource
* `POST` Add a new subresource (e.g., insert a record)
* `PUT` Update the resource to have a new state
* `PATCH` Update a portion of the resource’s state
* `DELETE` Remove the resource
* `OPTIONS` Return the set of methods that can be performed on the resource
By far the most common verb is `GET`, which is used to “get” (download) data from a web service. Depending on how you connect to your API (i.e., which programming language you are using), you’ll specify the verb of interest to indicate what you want to do to a particular resource.
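For instance, the `httr` package (introduced in the next section) provides a function named after each of these verbs; a minimal sketch, with placeholder URIs used purely for illustration:
```
library("httr")

# One function per HTTP verb (the URIs below are placeholders)
response <- GET("https://api.example.com/users")      # retrieve a resource
response <- POST("https://api.example.com/users",     # add a new record
                 body = list(name = "Ada"))
response <- DELETE("https://api.example.com/users/1") # remove a resource
```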
Overall, this structure of treating each datum on the web as a **resource** which we can interact with via **HTTP Requests** is referred to as the **REST Architecture** (REST stands for *REpresentational State Transfer*). This is a standard way of structuring computer applications that allows them to be interacted with in the same way as everyday websites. Thus a web service that enables data access through named resources and responds to HTTP requests is known as a **RESTful** service, with a *RESTful API*.
11\.3 Accessing Web APIs
------------------------
To access a Web API, you just need to send an HTTP Request to a particular URI. You can easily do this with the browser: simply navigate to a particular address (base URI \+ endpoint), and that will cause the browser to send a `GET` request and display the resulting data. For example, you can send a request to search GitHub for repositories named `d3` by visiting:
```
https://api.github.com/search/repositories?q=d3&sort=forks
```
This query accesses the `/search/repositories` endpoint, and also specifies 2 query parameters:
* `q`: The term(s) you are searching for, and
* `sort`: The attribute of each repository that you would like to use to sort the results
(Note that the data you’ll get back is structured in JSON format. See [below](apis.html#json) for details).
In `R` you can send GET requests using the [`httr`](https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html) library. Like `dplyr`, you will need to install and load it to use it:
```
install.packages("httr") # once per machine
library("httr")
```
This library provides a number of functions that reflect HTTP verbs. For example, the **`GET()`** function will send an HTTP GET Request to the URI specified as an argument:
```
# Search GitHub for repositories named "d3", sorted by forks
response <- GET("https://api.github.com/search/repositories?q=d3&sort=forks")
```
While it is possible to include *query parameters* in the URI string, `httr` also allows you to include them as a *list*, making it easy to set and change variables (instead of needing to do a complex `paste0()` operation to produce the correct string):
```
# Equivalent to the above, but easier to read and change
query_params <- list(q = "d3", sort = "forks")
response <- GET("https://api.github.com/search/repositories", query = query_params)
```
If you try printing out the `response` variable, you’ll see information about the response:
```
Response [https://api.github.com/search/repositories?q=d3&sort=forks]
Date: 2017-10-22 19:05
Status: 200
Content-Type: application/json; charset=utf-8
Size: 2.16 kB
```
This is called the **response header**. Each **response** has two parts: the **header**, and the **body**. You can think of the response as an envelope: the *header* contains meta\-data like the address and postage date, while the *body* contains the actual contents of the letter (the data).
Since you’re almost always interested in working with the *body*, you will need to extract that data from the response (e.g., open up the envelope and pull out the letter). You can do this with the `content()` method:
```
# extract content from response, as a text string (not a list!)
body <- content(response, "text")
```
Note the second argument `"text"`; this is needed to keep `httr` from doing its own processing on the body data, since we’ll be using other methods to handle that; keep reading for details!
*Pro\-tip*: The URI shown when you print out the `response` is a good way to check exactly what URI you were sending the request to: copy that into your browser to make sure it goes where you expected!
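It is also worth confirming that the request actually succeeded before you try to parse the body; a minimal sketch using `httr`’s status helpers:
```
# A status of 200 means the request succeeded
if (status_code(response) == 200) {
  body <- content(response, "text") # safe to extract the body
} else {
  print(http_status(response)$message) # e.g., "Client error: (404) Not Found"
}
```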
11\.4 JSON Data
---------------
Most APIs will return data in **JavaScript Object Notation (JSON)** format. Like CSV, this is a format for writing down structured data—but while `.csv` files organize data into rows and columns (like a data frame), JSON allows you to organize elements into **key\-value pairs** similar to an R *list*! This allows the data to have much more complex structure, which is useful for web services (but can be challenging for us).
In JSON, lists of key\-value pairs (called *objects*) are put inside braces (**`{ }`**), with the key and value separated by a colon (**`:`**) and each pair separated by a comma (**`,`**). Key\-value pairs are often written on separate lines for readability, but this isn’t required. Note that keys need to be character strings (so in quotes), while values can either be character strings, numbers, booleans (written in lower\-case as `true` and `false`), or even other lists! For example:
```
{
"first_name": "Ada",
"job": "Programmer",
"salary": 78000,
"in_union": true,
"favorites": {
"music": "jazz",
"food": "pizza"
}
}
```
(In JavaScript the period `.` has special meaning, so it is not used in key names; hence the underscores `_`). The above is equivalent to the `R` list:
```
list(
first_name = "Ada", job = "Programmer", salary = 78000, in_union = TRUE,
favorites = list(music = "jazz", food = "pizza") # nested list in the list!
)
```
Additionally, JSON supports what are called *arrays* of data. These are like lists without keys (and so are only accessed by index). Key\-less arrays are written in square brackets (**`[ ]`**), with values separated by commas. For example:
```
["Aardvark", "Baboon", "Camel"]
```
which is equivalent to the `R` list:
```
list("Aardvark", "Baboon", "Camel")
```
(Like *objects*, array elements may or may not be written on separate lines).
Just as R allows you to have nested lists of lists, and those lists may or may not have keys, JSON can have any form of nested *objects* and *arrays*. This can be arrays (unkeyed lists) within objects (keyed lists), such as a more complex set of data about Ada:
```
{
"first_name": "Ada",
"job": "Programmer",
"pets": ["rover", "fluffy", "mittens"],
"favorites": {
"music": "jazz",
"food": "pizza",
"numbers": [12, 42]
}
}
```
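Mirroring the earlier example, that structure corresponds roughly to the following R list (written out by hand here, with the JSON arrays shown as unnamed lists):
```
list(
  first_name = "Ada",
  job = "Programmer",
  pets = list("rover", "fluffy", "mittens"), # the JSON array becomes an unnamed list
  favorites = list(
    music = "jazz",
    food = "pizza",
    numbers = list(12, 42) # nested array, again unnamed
  )
)
```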
JSON can also be structured as *arrays* of *objects* (unkeyed lists of keyed lists), such as a list of data about Seahawks games:
```
[
{ "opponent": "Dolphins", "sea_score": 12, "opp_score": 10 },
{ "opponent": "Rams", "sea_score": 3, "opp_score": 9 },
{ "opponent": "49ers", "sea_score": 37, "opp_score": 18 },
{ "opponent": "Jets", "sea_score": 27, "opp_score": 17 },
{ "opponent": "Falcons", "sea_score": 26, "opp_score": 24 }
]
```
The latter format is incredibly common in web API data: as long as each *object* in the *array* has the same set of keys, then you can easily consider this as a data table where each *object* (keyed list) represents an **observation** (row), and each key represents a **feature** (column) of that observation.
### 11\.4\.1 Parsing JSON
When working with a web API, the usual goal is to take the JSON data contained in the *response* and convert it into an `R` data structure you can use, such as *list* or *data frame*. While the `httr` package is able to parse the JSON body of a response into a *list*, it doesn’t do a very clean job of it (particularly for complex data structures).
A more effective solution is to use *another* library called [`jsonlite`](https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf). This library provides helpful methods to convert JSON data into R data, and does a much more effective job of converting content into data frames that you can use.
As always, you will need to install and load this library:
```
install.packages("jsonlite") # once per machine
library("jsonlite")
```
`jsonlite` provides a function called **`fromJSON()`** that allows you to convert a JSON string into a list—or even a data frame if the columns have the right lengths!
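As a quick illustration, here is a minimal sketch that applies `fromJSON()` to a JSON string like the Seahawks example above; because every object in the array has the same keys, the result comes back as a data frame:
```
# A small JSON array of objects, written as a single R string
games_json <- '[
  { "opponent": "Dolphins", "sea_score": 12, "opp_score": 10 },
  { "opponent": "Rams", "sea_score": 3, "opp_score": 9 }
]'

games <- fromJSON(games_json) # parsed directly into a data frame
is.data.frame(games) # TRUE
# opponent sea_score opp_score
# 1 Dolphins 12 10
# 2 Rams 3 9
```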
```
# Search GitHub for repositories named "d3", sorted by forks
query_params <- list(q = "d3", sort = "forks")
response <- GET("https://api.github.com/search/repositories", query = query_params)
body <- content(response, "text") # extract the body JSON
parsed_data <- fromJSON(body) # convert the JSON string to a list
```
The `parsed_data` will contain a *list* built out of the JSON. Depending on the complexity of the JSON, this may already be a data frame you can `View()`… but more likely you’ll need to *explore* the list more to locate the “main” data you are interested in. Good strategies for this include:
* You can `print()` the data, but that is often hard to read (it requires a lot of scrolling).
* The `str()` method will produce a more organized printed list, though it can still be hard to read.
* The `names()` method will let you see a list of what keys the list has, which is good for delving into the data.
As an example continuing the above code:
```
is.data.frame(parsed_data) # FALSE; not a data frame you can work with
names(parsed_data) # "total_count" "incomplete_results" "items"
# looking at the JSON data itself (e.g., in the browser), `items` is the
# key that contains the value we want
items <- parsed_data$items # extract that element from the list
is.data.frame(items) # TRUE; you can work with that!
```
### 11\.4\.2 Flattening Data
Because JSON supports—and in fact encourages—nested lists (lists within lists), parsing a JSON string is likely to produce a data frame whose columns *are themselves data frames*. As an example:
```
# A somewhat contrived example
people <- data.frame(names = c("Spencer", "Jessica", "Keagan")) # a data frame with one column
favorites <- data.frame( # a data frame with two columns
food = c("Pizza", "Pasta", "salad"),
music = c("Bluegrass", "Indie", "Electronic")
)
# Store second dataframe as column of first
people$favorites <- favorites # the `favorites` column is a data frame!
# This prints nicely...
print(people)
# names favorites.food favorites.music
# 1 Spencer Pizza Bluegrass
# 2 Jessica Pasta Indie
# 3 Keagan salad Electronic
# but doesn't actually work like you expect!
people$favorites.food # NULL
people$favorites$food # [1] Pizza Pasta salad
```
Nested data frames make it hard to work with the data using previously established techniques. Luckily, the `jsonlite` package provides a helpful function for addressing this called **`flatten()`**. This function takes the columns of each *nested* data frame and converts them into appropriately named columns in the “outer” data frame:
```
people <- flatten(people)
people$favorites.food # this just got created! Woo!
```
Note that `flatten()` only works on values that are *already data frames*; thus you may need to find the appropriate element inside of the list (that is, the item which is the data frame you want to flatten).
In practice, you will almost always want to flatten the data returned from a web API. Thus your “algorithm” for downloading web data is as follows:
1. Use `GET()` to download the data, specifying the URI (and any query parameters).
2. Use `content()` to extract the data as a JSON string.
3. Use `fromJSON()` to convert the JSON string into a list.
4. Find which element in that list is your data frame of interest. You may need to go “multiple levels” deep.
5. Use `flatten()` to flatten that data frame.
6. …
7. Profit!
*Pro\-tip*: JSON data can be quite messy when viewed in your web\-browser. Installing a browser extension such as
[JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc) will format JSON responses in a more readable way, and even enable you to interactively explore the data structure.
Resources
---------
* [URIs (Wikipedia)](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier)
* [HTTP Protocol Tutorial](https://code.tutsplus.com/tutorials/http-the-protocol-every-web-developer-must-know-part-1--net-31177)
* [Programmable Web](http://www.programmableweb.com/) (list of web APIs; may be out of date)
* [RESTful Architecture](https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm) (original specification; not for beginners)
* [JSON View Extension](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc?hl=en)
* [`httr` documentation](https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html)
* [`jsonlite` documentation](https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf)
11\.1 What is a Web API?
------------------------
An **interface** is the point at which two different systems meet and *communicate*: exchanging information and instructions. An **Application Programming Interface (API)** thus represents a way of communicating with a computer application by writing a computer program (a set of formal instructions understandable by a machine). APIs commonly take the form of **functions** that can be called to give instructions to programs—the set of functions provided by a library like `dplyr` makes up the API for that library.
While most APIs provide an interface for utilizing *functionality*, other APIs provide an interface for accessing *data*. One of the most common sources of these data APIs is **web services**: websites that offer an interface for accessing their data.
With web services, the interface (the set of “functions” you can call to access the data) takes the form of **HTTP Requests**—a *request* for data sent following the ***H**yper**T**ext **T**ransfer **P**rotocol*. This is the same protocol (way of communicating) used by your browser to view a web page! An HTTP Request represents a message that your computer sends to a web server (another computer on the internet which “serves”, or provides, information). That server, upon receiving the request, will determine what data to include in the **response** it sends *back* to the requesting computer. With a web browser, the response data takes the form of HTML files that the browser can *render* as web pages. With data APIs, the response data will be structured data that you can convert into R structures such as lists or data frames.
In short, loading data from a Web API involves sending an **HTTP Request** to a server for a particular piece of data, and then receiving and parsing the **response** to that request.
11\.2 RESTful Requests
----------------------
There are two parts to a request sent to an API: the name of the **resource** (data) that you wish to access, and a **verb** indicating what you want to do with that resource. In many ways, the *verb* is the function you want to call on the API, and the *resource* is an argument to that function.
### 11\.2\.1 URIs
Which **resource** you want to access is specified with a **Uniform Resource Identifier (URI)**. A URI is a generalization of a URL (Uniform Resource Locator)—what you commonly think of as “web addresses”. URIs act a lot like the *address* on a postal letter sent within a large organization such as a university: you indicate the business address as well as the department and the person, and will get a different response (and different data) from Alice in Accounting than from Sally in Sales.
* Note that the URI is the **identifier** (think: variable name) for the resource, while the **resource** is the actual *data* value that you want to access.
Like postal letter addresses, URIs have a very specific format used to direct the request to the right resource.
(Figure: the format (schema) of a URI.)
Not all parts of the format are required—for example, you don’t need a `port`, `query`, or `fragment`. Important parts of the format include:
* `scheme` (`protocol`): the “language” that the computer will use to communicate the request to this resource. With web services this is normally `https` (**s**ecure HTTP)
* `domain`: the address of the web server to request information from
* `path`: which resource on that web server you wish to access. This may be the name of a file with an extension if you’re trying to access a particular file, but with web services it often just looks like a folder path!
* `query`: extra **parameters** (arguments) about what resource to access.
The `domain` and `path` usually specify the resource. For example, `www.domain.com/users` might be an *identifier* for a resource which is a list of users. Note that web services can also have “subresources” by adding extra pieces to the path: `www.domain.com/users/mike` might refer to the specific “mike” user in that list.
With an API, the domain and path are often viewed as being broken up into two parts:
* The **Base URI** is the domain and part of the path that is included on *all* resources. It acts as the “root” for any particular resource. For example, the [GitHub API](https://developer.github.com/v3/) has a base URI of `https://api.github.com/`.
* An **Endpoint** is the specific resource on that domain that you want to access. Each API will have *many* different endpoints.
For example, GitHub includes endpoints such as:
+ `/users/{user}` to get information about a specific user (the `{}` indicates a “variable”, in that you can put any username in there in place of the string `"{user}"`). Check out this [example](https://api.github.com/users/mkfreeman) in your browser.
+ `/orgs/{organization}/repos` to get the repositories that are on an organization page. See an [example](https://api.github.com/orgs/info201a-au17/repos).
Thus you can equivalently talk about accessing a particular **resource** and sending a request to a particular **endpoint**. The **endpoint** is appended to the end of the **Base URI**, so you could access a GitHub user by combining the **Base URI** (`https://api.github.com`) and **endpoint** (`/users/mkfreeman`) into a single string: `https://api.github.com/users/mkfreeman`. That URL will return a data structure describing that user’s account, which you can request from your R program or simply view in your web\-browser.
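Since the endpoint is simply appended to the base URI, you can assemble request URLs with ordinary string functions. A minimal sketch in base R, reusing the GitHub example above (nothing here is specific to any one API):
```
# build a request URL by pasting the base URI and endpoint together
base_uri <- "https://api.github.com"
username <- "mkfreeman"                    # fills in the {user} "variable"
endpoint <- paste0("/users/", username)
request_uri <- paste0(base_uri, endpoint)  # "https://api.github.com/users/mkfreeman"
```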
#### 11\.2\.1\.1 Query Parameters
Often in order to access only partial sets of data from a resource (e.g., to only get some users) you also include a set of **query parameters**. These are like extra arguments that are given to the request function. Query parameters are listed after a question mark **`?`** in the URI, and are formed as key\-value pairs similar to how you named items in *lists*. The **key** (*parameter name*) is listed first, followed by an equal sign **`=`**, followed by the **value** (*parameter value*); note that you can’t include any spaces in URIs! You can include multiple query parameters by putting an ampersand **`&`** between each key\-value pair:
```
?firstParam=firstValue&secondParam=secondValue&thirdParam=thirdValue
```
Exactly what parameter names you need to include (and what are legal values to assign to that name) depends on the particular web service. Common examples include having parameters named `q` or `query` for searching, with a value being whatever term you want to search for: in [`https://www.google.com/search?q=informatics`](https://www.google.com/search?q=informatics), the **resource** at the `/search` **endpoint** takes a query parameter `q` with the term you want to search for!
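To connect this to R syntax you already know, here is a small sketch that builds the Google search URI above out of a named list of parameters. This is for illustration only; the `httr` package introduced later in the chapter can construct the query string for you.
```
# build a query string from a named list of parameters (illustration only)
params <- list(q = "informatics")
pairs <- paste0(names(params), "=", unlist(params))  # "q=informatics"
query_string <- paste(pairs, collapse = "&")         # join pairs with "&"
uri <- paste0("https://www.google.com/search?", query_string)
```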
#### 11\.2\.1\.2 Access Tokens and API Keys
Many web services require you to register with them in order to send them requests. This allows them to limit access to the data, as well as to keep track of who is asking for what data (usually so that if someone starts “spamming” the service, they can be blocked).
To facilitate this tracking, many services provide **Access Tokens** (also called **API Keys**). These are unique strings of letters and numbers that identify a particular developer (like a secret password that only works for you). Web services will require you to include your *access token* as a query parameter in the request; the exact name of the parameter varies, but it often looks like `access_token` or `api_key`. When exploring a web service, keep an eye out for whether they require such tokens.
*Access tokens* act a lot like passwords; you will want to keep them secret and not share them with others. This means that you **should not include them in your committed files**, so that the passwords don’t get pushed to GitHub and shared with the world. The best way to get around this in R is to create a separate script file in your repo (e.g., `api-keys.R`) which includes exactly one line: assigning the key to a variable:
```
# in `api-keys.R`
api_key <- "123456789abcdefg"
```
You can then list that file’s name in a **`.gitignore`** file in your repo; that will keep the key from even accidentally being committed with your code!
In order to access this variable in your “main” script, you can use the `source()` function to load and run your `api-keys.R` script. This will execute the line of code that assigns the `api_key` variable, making it available in your environment for your use:
```
# in `my-script.R`
# (make sure working directory is set)
source('api-keys.R') # load the script
print(api_key) # key is now available!
```
Anyone else who runs the script will simply need to provide an `api_key` variable to access the API using their key, keeping everyone’s account separate!
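As a hedged sketch of how the sourced key might then be used: many services accept the key as a query parameter alongside the other parameters, which you can pass via the `GET()` function introduced in the next section. The base URI and the parameter name `api_key` below are placeholders; check the documentation of the actual service you are using.
```
# in `my-script.R`
source("api-keys.R")  # defines `api_key`, and is itself listed in .gitignore

library("httr")

# hypothetical service; the URI and parameter name depend on the real API
response <- GET(
  "https://api.example.com/data",
  query = list(api_key = api_key, q = "seattle")
)
```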
Watch out for APIs that mention using [OAuth](https://en.wikipedia.org/wiki/OAuth) when explaining API keys. OAuth is a system for performing **authentication**—that is, letting someone log into a website from your application (like what a “Log in with Facebook” button does). OAuth systems require more than one access key, and these keys ***must*** be kept secret and usually require you to run a web server to utilize them correctly (which requires lots of extra setup; see [the full `httr` docs](https://cran.r-project.org/web/packages/httr/httr.pdf) for details). So for this course, we encourage you to avoid anything that needs OAuth.
### 11\.2\.2 HTTP Verbs
When you send a request to a particular resource, you need to indicate what you want to *do* with that resource. When you load web content, you are typically sending a request to retrieve information (logically, this is a `GET` request). However, there are other actions you can perform to modify the data structure on the server. This is done by specifying an **HTTP Verb** in the request. The HTTP protocol supports the following verbs:
* `GET` Return a representation of the current state of the resource
* `POST` Add a new subresource (e.g., insert a record)
* `PUT` Update the resource to have a new state
* `PATCH` Update a portion of the resource’s state
* `DELETE` Remove the resource
* `OPTIONS` Return the set of methods that can be performed on the resource
By far the most common verb is `GET`, which is used to “get” (download) data from a web service. Depending on how you connect to your API (i.e., which programming language you are using), you’ll specify the verb of interest to indicate what you want to do to a particular resource.
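In R specifically, the `httr` package (introduced in the next section) provides a function named after each of these verbs. A hedged sketch, using the placeholder URI `https://api.example.com/users` rather than any real service:
```
library("httr")

# each httr function sends a request with the corresponding HTTP verb
GET("https://api.example.com/users")                 # retrieve the resource
POST("https://api.example.com/users",                # add a subresource
     body = list(name = "Ada"), encode = "json")
PUT("https://api.example.com/users/ada",             # replace its state
    body = list(name = "Ada", job = "Programmer"), encode = "json")
DELETE("https://api.example.com/users/ada")          # remove it
```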
Overall, this structure of treating each datum on the web as a **resource** which we can interact with via **HTTP Requests** is referred to as the **REST Architecture** (REST stands for *REpresentational State Transfer*). This is a standard way of structuring computer applications that allows them to be interacted with in the same way as everyday websites. Thus a web service that enables data access through named resources and responds to HTTP requests is known as a **RESTful** service, with a *RESTful API*.
11\.3 Accessing Web APIs
------------------------
To access a Web API, you just need to send an HTTP Request to a particular URI. You can easily do this with the browser: simply navigate to a particular address (base URI \+ endpoint), and that will cause the browser to send a `GET` request and display the resulting data. For example, you can send a request to search GitHub for repositories named `d3` by visiting:
```
https://api.github.com/search/repositories?q=d3&sort=forks
```
This query accesses the `/search/repositories` endpoint, and also specifies 2 query parameters:
* `q`: The term(s) you are searching for, and
* `sort`: The attribute of each repository that you would like to use to sort the results
(Note that the data you’ll get back is structured in JSON format. See [below](apis.html#json) for details).
In `R` you can send GET requests using the [`httr`](https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html) library. Like `dplyr`, you will need to install and load it to use it:
```
install.packages("httr") # once per machine
library("httr")
```
This library provides a number of functions that reflect HTTP verbs. For example, the **`GET()`** function will send an HTTP GET Request to the URI specified as an argument:
```
# Search GitHub for repositories matching "d3", sorted by forks
response <- GET("https://api.github.com/search/repositories?q=d3&sort=forks")
```
While it is possible to include *query parameters* in the URI string, `httr` also allows you to include them as a *list*, making it easy to set and change variables (instead of needing to do a complex `paste0()` operation to produce the correct string):
```
# Equivalent to the above, but easier to read and change
query_params <- list(q = "d3", sort = "forks")
response <- GET("https://api.github.com", query = query_params)
```
If you try printing out the `response` variable, you’ll see information about the response:
```
Response [https://api.github.com/search/repositories?q=d3&sort=forks]
Date: 2017-10-22 19:05
Status: 200
Content-Type: application/json; charset=utf-8
Size: 2.16 kB
```
This is called the **response header**. Each **response** has two parts: the **header** and the **body**. You can think of the response as an envelope: the *header* contains meta\-data like the address and postage date, while the *body* contains the actual contents of the letter (the data).
Since you’re almost always interested in working with the *body*, you will need to extract that data from the response (e.g., open up the envelope and pull out the letter). You can do this with the `content()` method:
```
# extract content from response, as a text string (not a list!)
body <- content(response, "text")
```
Note the second argument `"text"`; this is needed to keep `httr` from doing its own processing on the body data, since we’ll be using other methods to handle that; keep reading for details!
*Pro\-tip*: The URI shown when you print out the `response` is a good way to check exactly what URI you were sending the request to: copy that into your browser to make sure it goes where you expected!
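Before parsing the body, it is often worth confirming that the request actually succeeded. A small sketch using `httr` helpers, continuing with the `response` object from above:
```
# inspect where the request actually went and whether it succeeded
response$url           # the full URI that was requested
status_code(response)  # 200 means OK; 404 means not found; etc.

# only extract the body when the request succeeded
if (status_code(response) == 200) {
  body <- content(response, "text")
}
```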
11\.4 JSON Data
---------------
Most APIs will return data in **JavaScript Object Notation (JSON)** format. Like CSV, this is a format for writing down structured data—but while `.csv` files organize data into rows and columns (like a data frame), JSON allows you to organize elements into **key\-value pairs** similar to an R *list*! This allows the data to have much more complex structure, which is useful for web services (but can be challenging for us).
In JSON, lists of key\-value pairs (called *objects*) are put inside braces (**`{ }`**), with the key and value separated by a colon (**`:`**) and each pair separated by a comma (**`,`**). Key\-value pairs are often written on separate lines for readability, but this isn’t required. Note that keys need to be character strings (so in quotes), while values can either be character strings, numbers, booleans (written in lower\-case as `true` and `false`), or even other lists! For example:
```
{
"first_name": "Ada",
"job": "Programmer",
"salary": 78000,
"in_union": true,
"favorites": {
"music": "jazz",
"food": "pizza",
}
}
```
(In JavaScript the period `.` is used to access object properties, so it is conventionally avoided in key names; hence the underscores `_`.) The above is equivalent to the `R` list:
```
list(
first_name = "Ada", job = "Programmer", salary = 78000, in_union = TRUE,
favorites = list(music = "jazz", food = "pizza") # nested list in the list!
)
```
Additionally, JSON supports what are called *arrays* of data. These are like lists without keys (and so are only accessed by index). Key\-less arrays are written in square brackets (**`[ ]`**), with values separated by commas. For example:
```
["Aardvark", "Baboon", "Camel"]
```
which is equivalent to the `R` list:
```
list("Aardvark", "Baboon", "Camel")
```
(Like *objects*, array elements may or may not be written on separate lines.)
Just as R allows you to have nested lists of lists, and those lists may or may not have keys, JSON can have any form of nested *objects* and *arrays*. This can be arrays (unkeyed lists) within objects (keyed lists), such as a more complex set of data about Ada:
```
{
"first_name": "Ada",
"job": "Programmer",
"pets": ["rover", "fluffy", "mittens"],
"favorites": {
"music": "jazz",
"food": "pizza",
"numbers": [12, 42]
}
}
```
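For reference, that JSON corresponds roughly to the following nested `R` list (a sketch; the unkeyed JSON *arrays* become unnamed lists):
```
list(
  first_name = "Ada",
  job = "Programmer",
  pets = list("rover", "fluffy", "mittens"),  # array -> unnamed list
  favorites = list(
    music = "jazz",
    food = "pizza",
    numbers = list(12, 42)                    # nested array
  )
)
```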
JSON can also be structured as *arrays* of *objects* (unkeyed lists of keyed lists), such as a list of data about Seahawks games:
```
[
{ "opponent": "Dolphins", "sea_score": 12, "opp_score": 10 },
{ "opponent": "Rams", "sea_score": 3, "opp_score": 9 },
{ "opponent": "49ers", "sea_score": 37, "opp_score": 18 },
{ "opponent": "Jets", "sea_score": 27, "opp_score": 17 },
{ "opponent": "Falcons", "sea_score": 26, "opp_score": 24 }
]
```
The latter format is incredibly common in web API data: as long as each *object* in the *array* has the same set of keys, then you can easily consider this as a data table where each *object* (keyed list) represents an **observation** (row), and each key represents a **feature** (column) of that observation.
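As a preview of the `jsonlite` package introduced in the next section, an array of objects like the one above parses directly into a data frame (a minimal sketch using a shortened version of the games data):
```
library("jsonlite")

games_json <- '[
  { "opponent": "Dolphins", "sea_score": 12, "opp_score": 10 },
  { "opponent": "Rams",     "sea_score": 3,  "opp_score": 9 }
]'

games <- fromJSON(games_json)  # each object becomes a row, each key a column
is.data.frame(games)           # TRUE
```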
### 11\.4\.1 Parsing JSON
When working with a web API, the usual goal is to take the JSON data contained in the *response* and convert it into an `R` data structure you can use, such as *list* or *data frame*. While the `httr` package is able to parse the JSON body of a response into a *list*, it doesn’t do a very clean job of it (particularly for complex data structures).
A more effective solution is to use *another* library called [`jsonlite`](https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf). This library provides helpful methods to convert JSON data into R data, and does a much more effective job of converting content into data frames that you can use.
As always, you will need to install and load this library:
```
install.packages("jsonlite") # once per machine
library("jsonlite")
```
`jsonlite` provides a function called **`fromJSON()`** that allows you to convert a JSON string into a list—or even a data frame if the columns have the right lengths!
```
# search GitHub for repositories matching "d3" (see the previous section)
query_params <- list(q = "d3", sort = "forks")
response <- GET("https://api.github.com/search/repositories", query = query_params)
body <- content(response, "text") # extract the body JSON
parsed_data <- fromJSON(body) # convert the JSON string to a list
```
The `parsed_data` will contain a *list* built out of the JSON. Depending on the complexity of the JSON, this may already be a data frame you can `View()`… but more likely you’ll need to *explore* the list more to locate the “main” data you are interested in. Good strategies for this include:
* You can `print()` the data, but that is often hard to read (it requires a lot of scrolling).
* The `str()` method will produce a more organized printed list, though it can still be hard to read.
* The `names()` method will let you see a list of what keys the list has, which is good for delving into the data.
As an example continuing the above code:
```
is.data.frame(parsed_data) # FALSE; not a data frame you can work with
names(parsed_data) # "total_count" "incomplete_results" "items"
# looking at the JSON data itself (e.g., in the browser), `items` is the
# key that contains the value we want
items <- parsed_data$items # extract that element from the list
is.data.frame(items) # TRUE; you can work with that!
```
### 11\.4\.2 Flattening Data
Because JSON supports—and in fact encourages—nested lists (lists within lists), parsing a JSON string is likely to produce a data frame whose columns *are themselves data frames*. As an example:
```
# A somewhat contrived example
people <- data.frame(names = c("Spencer", "Jessica", "Keagan")) # a data frame with one column
favorites <- data.frame( # a data frame with two columns
food = c("Pizza", "Pasta", "salad"),
music = c("Bluegrass", "Indie", "Electronic")
)
# Store second dataframe as column of first
people$favorites <- favorites # the `favorites` column is a data frame!
# This prints nicely...
print(people)
# names favorites.food favorites.music
# 1 Spencer Pizza Bluegrass
# 2 Jessica Pasta Indie
# 3 Keagan salad Electronic
# but doesn't actually work like you expect!
people$favorites.food # NULL
people$favorites$food # [1] Pizza Pasta salad
```
Nested data frames make it hard to work with the data using previously established techniques. Luckily, the `jsonlite` package provides a helpful function for addressing this called **`flatten()`**. This function takes the columns of each *nested* data frame and converts them into appropriately named columns in the “outer” data frame:
```
people <- flatten(people)
people$favorites.food # this just got created! Woo!
```
Note that `flatten()` only works on values that are *already data frames*; thus you may need to find the appropriate element inside of the list (that is, the item which is the data frame you want to flatten).
In practice, you will almost always want to flatten the data returned from a web API. Thus your “algorithm” for downloading web data is as follows (a combined sketch appears after the list):
1. Use `GET()` to download the data, specifying the URI (and any query parameters).
2. Use `content()` to extract the data as a JSON string.
3. Use `fromJSON()` to convert the JSON string into a list.
4. Find which element in that list is your data frame of interest. You may need to go “multiple levels” deep.
5. Use `flatten()` to flatten that data frame.
6. …
7. Profit!
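Putting those steps together, here is a hedged sketch of the whole pipeline, using the GitHub repository search from earlier in the chapter:
```
library("httr")
library("jsonlite")

# 1. download the data
query_params <- list(q = "d3", sort = "forks")
response <- GET("https://api.github.com/search/repositories", query = query_params)

# 2. extract the body as a JSON string
body <- content(response, "text")

# 3. convert the JSON string into a list
parsed_data <- fromJSON(body)

# 4. find the element that is the data frame of interest
repos <- parsed_data$items

# 5. flatten any nested data frame columns
repos <- flatten(repos)
```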
*Pro\-tip*: JSON data can be quite messy when viewed in your web\-browser. Installing a browser extension such as
[JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc) will format JSON responses in a more readable way, and even enable you to interactively explore the data structure.
Resources
---------
* [URIs (Wikipedia)](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier)
* [HTTP Protocol Tutorial](https://code.tutsplus.com/tutorials/http-the-protocol-every-web-developer-must-know-part-1--net-31177)
* [Programmable Web](http://www.programmableweb.com/) (list of web APIs; may be out of date)
* [RESTful Architecture](https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm) (original specification; not for beginners)
* [JSON View Extension](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc?hl=en)
* [`httr` documentation](https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html)
* [`jsonlite` documentation](https://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf)
Chapter 12 R Markdown
=====================
R Markdown is a package that supports using `R` to dynamically create *documents*, such as websites (`.html` files), reports (`.pdf` files), slideshows (using `ioslides` or `slidy`), and even interactive web apps (using `shiny`).
As you may have guessed, R Markdown does this by providing the ability to blend Markdown syntax and `R` code so that, when executed, scripts will automatically inject your code results into a formatted document. The ability to automatically generate reports and documents from a computer script eliminates the need to manually update the *results* of a data analysis project, enabling you to more effectively share the *information* that you’ve produced from your data. In this chapter, you’ll learn the fundamentals of the RMarkdown library to create well\-formatted documents that combine analysis and reporting.
12\.1 R Markdown and RStudio
----------------------------
R Markdown documents are created from a combination of two libraries: `rmarkdown` (which processes the Markdown and generates the output) and `knitr` (which runs R code and produces Markdown\-like output). These packages are already included in RStudio, which provides built\-in support for creating and viewing R Markdown documents.
### 12\.1\.1 Creating `.Rmd` Files
The easiest way to begin a new R\-Markdown document in RStudio is to use the **File \> New File \> R Markdown** menu option:
(Figure: creating a new R Markdown document in RStudio.)
RStudio will then prompt you to provide some additional details about what kind of R Markdown document you want. In particular, you will need to choose a default *document type* and *output format*. You can also provide a title and author information which will be included in the document. This chapter will focus on creating HTML documents (websites; the default format)—other formats require the installation of additional software.
(Figure: specifying the document type.)
Once you’ve chosen your desired document type and output format, RStudio will open up a new script file for you. The file contains some example code to get you started.
### 12\.1\.2 `.Rmd` Content
At the top of the file is some text that has the format:
```
---
title: "Example"
author: "YOUR NAME HERE"
date: "1/30/2017"
output: html_document
---
```
This is the document “header” information, which tells R Markdown details about the file and how the file should be processed. For example, the `title`, `author`, and `date` will automatically be added to the top of your document. You can include additional information as well, such as [whether there should be a table of contents](http://rmarkdown.rstudio.com/html_document_format.html) or even [variable defaults](http://rmarkdown.rstudio.com/developer_parameterized_reports.html).
* The header is written in [YAML](https://en.wikipedia.org/wiki/YAML) format, which is yet another way of formatting structured data similar to `.csv` or JSON (in fact, YAML is a superset of JSON and can represent the same data structure, just using indentation and dashes instead of braces and commas).
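For example, a header that also enables a table of contents might look like the following (a minimal sketch; `toc` is one of the options described in the documentation linked above):
```
---
title: "Example"
author: "YOUR NAME HERE"
output:
  html_document:
    toc: true
---
```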
Below the header, you will find two types of content:
* **Markdown**: normal Markdown text like you learned in [Chapter 3](markdown.html#markdown). For example, you can use two pound symbols (`##`) for a second\-level heading.
* **Code Chunks**: These are segments (chunks) of R code that look like normal code block elements (fenced with three backticks), but with an extra `{r}` immediately after the opening backticks.
R Markdown will be able to execute the R code you include in code chunks, and render that output *in your Markdown*. More on this [below](r-markdown.html#r-markdown-syntax).
**Important** This file should be saved with the extension **`.Rmd`** (for “R Markdown”), which tells the computer and RStudio that the document contains Markdown content with embedded `R` code.
### 12\.1\.3 Knitting Documents
RStudio provides an easy interface to compile your `.Rmd` source code into an actual document (a process called **“knitting”**). Simply click the **Knit** button at the top of the script panel:
(Figure: RStudio’s Knit button.)
This will generate the document (in the same directory as your `.Rmd` file), as well as open up a preview window in RStudio.
While it is easy to generate such documents, the knitting process can make it hard to debug errors in your `R` code (whether syntax or logical), in part because the output may or may not show up in the document! We suggest that you write complex `R` code in another script and then `source()` that script into your `.Rmd` file to use its output. This makes it possible to test your data processing work outside of the knitting process, as well as *separates the concerns* of the data and its representation—which is good programming practice.
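For instance, your `.Rmd` might contain a single chunk that loads the results of an external script. This is only a sketch: `analysis.R` and the `results_table` variable it defines are hypothetical names.
```
```{r load_analysis, echo=FALSE}
# `analysis.R` is a separate (hypothetical) script that does the heavy
# lifting and defines a data frame called `results_table`
source("analysis.R")
```
The rest of the report can now reference `results_table` in chunks or inline code.
```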
Nevertheless, you should still be sure to knit your document frequently, paying close attention to any errors that appear in the console.
*Pro\-tip*: If you’re having trouble finding your error, a good strategy is to systematically remove segments of your code and attempt to re\-knit the document. This will help you identify the problematic syntax.
### 12\.1\.4 HTML
Assuming that you’ve chosen HTML as your desired output type, RStudio will knit your `.Rmd` into a `.html` file. HTML stands for ***H**yper**T**ext **M**arkup **L**anguage* and, like Markdown, is a syntax for describing the structure and formatting of content (though HTML is **far** more extensive and detailed). In particular, HTML is a markup language that can be automatically rendered by web browsers, and thus is the language used to create web pages. As such, the `.html` files you create can be put online as web pages for others to view—you will learn how to do this in a future chapter. For now, you can open a `.html` file in any browser (such as by double\-clicking on the file) to see the content outside of RStudio!
* As it turns out, it’s quite simple to use GitHub to host publicly available webpages (like the `.html` files you create with RMarkdown). But, this will require learning a bit more about `git` and GitHub. For instructions on publishing your `.html` files as web\-pages, see [chapter 14](git-branches.html#git-branches).
12\.2 R Markdown Syntax
-----------------------
What makes R Markdown distinct from simple Markdown code is the ability to actually *execute your `R` code and include the output directly in the document*. `R` code can be executed and included in the document in blocks of code, or even inline in the document!
### 12\.2\.1 R Code Chunks
Code that is to be executed (rather than simply displayed as formatted text) is called a **code chunk**. To specify a code chunk, you need to include **`{r}`** immediately after the three backticks that start the code block. For example:
```
Write normal **markdown** out here, then create a code block:
```{r}
# Execute R code in here
x <- 201
```
Back to writing _markdown_ out here.
```
Note that by default, the code chunk will *render* any raw expressions (e.g., `x`)—just like you would see in the console if you selected all the code in the chunk and used `ctrl-enter` to execute it.
It is also possible to specify additional configuration **options** by including a comma\-separated list of named arguments (like you’ve done with lists and functions) inside the curly braces following the `r`:
```
```{r options_example, echo=FALSE, message=TRUE}
# a code chunk named "options_example", with parameter `echo` assigned FALSE
# and parameter `message` assigned TRUE
# Would execute R code in here
```
```
* The first “argument” (`options_example`) is a “name” for the chunk, and the following are named arguments for the options. Chunks should be named as a variable or function, based on what code is being executed and/or rendered by the chunk. It’s always a good idea to name individual code chunks as a form of documentation.
There are [many options](https://yihui.name/knitr/options/) for creating code chunks (see also the [reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)). However, some of the most useful ones have to do with how the code is output in the document. These include:
* **`echo`** indicates whether you want the *R code itself* to be displayed in the document (e.g., if you want readers to be able to see your work and reproduce your calculations and analysis). Value is either `TRUE` (do display; the default) or `FALSE` (do not display).
* **`message`** indicates whether you want any messages generated by the code to be displayed. This includes print statements! Value is either `TRUE` (do display; the default) or `FALSE` (do not display).
If you only want to *show* your `R` code (and not *evaluate* it), you can alternatively use a standard Markdown code block whose opening fence is followed by just `r` (*not* `{r}`), or set the `eval` option to `FALSE`.
### 12\.2\.2 Inline Code
In addition to creating distinct code blocks, you may want to execute R code *inline* with the rest of your text. This empowers you to **reference a variable** from your code chunk in a section of Markdown—injecting that variable into the text you have written. This allows you to easily include a specific result inside a paragraph of text. So if the computation changes, re\-knitting your document will update the values inside the text without any further work needed.
As with code blocks, you’ll follow the Markdown convention of using single backticks (**```**), but put the letter **`r`** immediately after the first backtick. For example:
```
To calculate 3 + 4 inside some text, we can use `r 3 + 4` right in the _middle_.
```
When you knit the text above, the ``r 3 + 4`` would be replaced with the number `7`.
Note you can also reference values computed in the code blocks preceding your inline code; it is **best practice** to do your calculations in a code block (with `echo=FALSE`), save the result in a variable, and then simply inline that variable with e.g., ``r my.variable``.
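A small sketch of that pattern (the variable name `avg_value` is just an example):
```
```{r compute_average, echo=FALSE}
# do the calculation in a hidden chunk and store it in a variable
avg_value <- mean(1:10)
```
The average of the numbers 1 through 10 is `r avg_value`.
```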
12\.3 Rendering Data
--------------------
R Markdown’s code chunks let you perform data analysis directly in your document, but often you will want to include more complex data output. This section discusses a few tips for specifying dynamic, complex output to render using R Markdown.
### 12\.3\.1 Rendering Strings
If you experiment with knitting R Markdown, you will quickly notice that using `print()` will generate a code block with content that looks like a printed vector:
```
```{r echo=FALSE}
print("Hello world")
```
```
```
## [1] "Hello world"
```
For this reason, you usually want to have the code block generate a string that you save in a variable, which you can then display with an inline expression (e.g., on its own line):
```
```{r echo=FALSE}
msg <- "Hello world"
```
Below is the message to see:
`r msg`
```
Note that any Markdown syntax included in the variable (e.g., if you had `msg <- "**Hello** world"`) will be rendered as well—the ``r msg`` is replaced by the value of the expression just as if you had typed that Markdown in directly. This allows you to even include dynamic styling if you construct a “Markdown string” out of your data.
Alternatively, you can use the [`results`](https://yihui.name/knitr/options/#text-results) option with a value of `'asis'`, which will cause the “output” to be rendered directly into the markdown. When combined with the [`cat()`](https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/cat) function (which con**cat**enates content without specifying additional information like vector position), you can make a code chunk effectively render a specific string:
```
```{r results='asis', echo=FALSE}
cat("Hello world")
```
```
### 12\.3\.2 Rendering Lists
Because outputted strings render any Markdown they contain, it’s possible to specify complex Markdown such as *lists* by constructing these strings to contain the `-` symbols used for list items (note that each item will need to be separated by a line break or a `\n` character):
```
```{r echo=FALSE}
markdown.list <- "
- Lions
- Tigers
- Bears
"
```
`r markdown.list`
```
Would output a list that looks like:
* Lions
* Tigers
* Bears
Combined with the vectorized `paste()` function, it’s easy to convert vectors into Markdown lists that can be rendered:
```
```{r echo=FALSE}
animals <- c("Lions", "Tigers", "Bears")
# paste a `-` in front of each, then cat the items with newlines between
markdown.list <- paste(paste('-',animals), collapse='\n')
```
`r markdown.list`
```
And of course, the contents of the vector (e.g., the text `"Lions"`) could easily have additional Markdown syntax to include bold, italic, or hyperlinked text.
* Creating a “helper function” to do this conversion is perfectly reasonable; or see libraries such as [`pander`](http://rapporter.github.io/pander/) which defines a number of such functions.
### 12\.3\.3 Rendering Tables
Because data frames are so central to programming with R, R Markdown includes capabilities to easily render data frames as Markdown *tables* via the [**`knitr::kable()`**](https://www.rdocumentation.org/packages/knitr/versions/1.19/topics/kable) function. This function takes as an argument the data frame you wish to render, and it will automatically convert that value into a Markdown table:
```
```{r echo=FALSE}
library(knitr) # make sure you load this library (once per doc)
# make a data frame
letters <- c("a", "b", "c")
numbers <- 1:3
df <- data.frame(letters = letters, numbers = numbers)
# render the table
kable(df)
```
```
* `kable()` supports a number of other arguments that can be used to customize how it outputs a table (see the sketch below).
* And of course, if the values in the data frame are strings that contain Markdown syntax (e.g., bold, italic, or hyperlinks), they will be rendered as such in the table!
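For instance, one such argument (a hedged sketch, reusing the `df` data frame from the chunk above) is `col.names`, which relabels the headers of the rendered table:
```
```{r kable_colnames, echo=FALSE}
# relabel the columns of the rendered table
kable(df, col.names = c("Letter", "Number"))
```
```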
So while you may need to do a little bit of work to manually generate the Markdown syntax, it is possible to dynamically produce complex documents based on dynamic data sources
Resources
---------
* [R Markdown Homepage](http://rmarkdown.rstudio.com/)
* [R Markdown Cheatsheet](https://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf) (really useful!)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf) (really useful!)
* [`knitr`](https://yihui.name/knitr/)
12\.1 R Markdown and RStudio
----------------------------
R Markdown documents are created from a combination of two libraries: `rmarkdown` (which processes the markdown and generates the output) and `knitr` (which runs R code and produces Markdown\-like output). These packages are already included in RStudio, which provides built\-in support for creating and viewing R Markdown documents.
### 12\.1\.1 Creating `.Rmd` Files
The easiest way to begin a new R\-Markdown document in RStudio is to use the **File \> New File \> R Markdown** menu option:
Create a new R Markdown document in RStudio.
RStudio will then prompt you to provide some additional details about what kind of R Markdown document you want. In particular, you will need to choose a default *document type* and *output format*. You can also provide a title and author information which will be included in the document. This chapter will focus on creating HTML documents (websites; the default format)—other formats require the installation of additional software.
Specify document type.
Once you’ve chosen *R Markdown* as your desired file type and selected your document type and output format, RStudio will open a new script file for you. The file contains some example code to get you started.
### 12\.1\.2 `.Rmd` Content
At the top of the file is some text that has the format:
```
---
title: "Example"
author: "YOUR NAME HERE"
date: "1/30/2017"
output: html_document
---
```
This is the document “header” information, which tells R Markdown details about the file and how the file should be processed. For example, the `title`, `author`, and `date` will automatically be added to the top of your document. You can include additional information as well, such as [whether there should be a table of contents](http://rmarkdown.rstudio.com/html_document_format.html) or even [variable defaults](http://rmarkdown.rstudio.com/developer_parameterized_reports.html).
* The header is written in [YAML](https://en.wikipedia.org/wiki/YAML) format, which is yet another way of formatting structured data similar to `.csv` or JSON (in fact, YAML is a superset of JSON and can represent the same data structure, just using indentation and dashes instead of braces and commas).
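For example, a table of contents can be requested through the `toc` option of the `html_document` output format. A minimal sketch of such a header (the `toc: true` setting is the only addition to the default header shown above) might look like:
```
---
title: "Example"
author: "YOUR NAME HERE"
date: "1/30/2017"
output:
  html_document:
    toc: true
---
```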
Below the header, you will find two types of content:
* **Markdown**: normal Markdown text like you learned in [Chapter 3](markdown.html#markdown). For example, you can use two pound symbols (`##`) for a second\-level heading.
* **Code Chunks**: These are segments (chunks) of R code that look like normal code block elements (using `````), but with an extra `{r}` immediately after the opening backticks.
R Markdown will be able to execute the R code you include in code chunks, and render that output *in your Markdown*. More on this [below](r-markdown.html#r-markdown-syntax).
**Important** This file should be saved with the extension **`.Rmd`** (for “R Markdown”), which tells the computer and RStudio that the document contains Markdown content with embedded `R` code.
### 12\.1\.3 Knitting Documents
RStudio provides an easy interface to compile your `.Rmd` source code into an actual document (a process called **“knitting”**). Simply click the **Knit** button at the top of the script panel:
RStudio’s Knit button
This will generate the document (in the same directory as your `.Rmd` file), as well as open up a preview window in RStudio.
While it is easy to generate such documents, the knitting process can make it hard to debug errors in your `R` code (whether syntax or logical), in part because the output may or may not show up in the document! We suggest that you write complex `R` code in another script and then `source()` that script into your `.Rmd` file to use its output. This makes it possible to test your data processing work outside of the knitting process, and it also *separates the concerns* of the data and its representation, which is good programming practice.
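As a minimal sketch of this workflow (the file name `analysis.R` and the data frame `results` it is assumed to create are hypothetical), a hidden chunk near the top of the `.Rmd` can source the script, after which later chunks and inline expressions can use the objects it defines:
```
```{r setup_analysis, echo=FALSE, message=FALSE}
# run the heavier, separately testable analysis code;
# `analysis.R` is assumed to define a data frame called `results`
source("analysis.R")
```

The analysis produced `r nrow(results)` rows of results.
```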
Nevertheless, you should still be sure to knit your document frequently, paying close attention to any errors that appear in the console.
*Pro\-tip*: If you’re having trouble finding your error, a good strategy is to systematically remove segments of your code and attempt to re\-knit the document. This will help you identify the problematic syntax.
### 12\.1\.4 HTML
Assuming that you’ve chosen HTML as your desired output type, RStudio will knit your `.Rmd` into a `.html` file. HTML stands for ***H**yper**T**ext **M**arkup **L**anguage* and, like Markdown, is a syntax for describing the structure and formatting of content (though HTML is **far** more extensive and detailed). In particular, HTML is a markup language that can be automatically rendered by web browsers, and thus is the language used to create web pages. As such, the `.html` files you create can be put online as web pages for others to view—you will learn how to do this in a future chapter. For now, you can open a `.html` file in any browser (such as by double\-clicking on the file) to see the content outside of RStudio!
* As it turns out, it’s quite simple to use GitHub to host publicly available webpages (like the `.html` files you create with RMarkdown). But, this will require learning a bit more about `git` and GitHub. For instructions on publishing your `.html` files as web\-pages, see [chapter 14](git-branches.html#git-branches).
12\.2 R Markdown Syntax
-----------------------
What makes R Markdown distinct from simple Markdown code is the ability to actually *execute your `R` code and include the output directly in the document*. `R` code can be executed and included either in distinct blocks of code or inline within the text of the document!
### 12\.2\.1 R Code Chunks
Code that is to be executed (rather than simply displayed as formatted text) is called a **code chunk**. To specify a code chunk, you need to include **`{r}`** immediately after the backticks that start the code block (the `````). For example:
```
Write normal **markdown** out here, then create a code block:
```{r}
# Execute R code in here
x <- 201
```
Back to writing _markdown_ out here.
```
Note that by default, the code chunk will *render* any raw expressions (e.g., `x`)—just like you would see in the console if you selected all the code in the chunk and used `ctrl-enter` to execute it.
It is also possible to specify additional configuration **options** by including a comma\-separated list of named arguments (like you’ve done with lists and functions) inside the curly braces following the `r`:
```
```{r options_example, echo=FALSE, message=TRUE}
# a code chunk named "options_example", with parameter `echo` assigned FALSE
# and parameter `message` assigned TRUE
# Would execute R code in here
```
```
* The first “argument” (`options_example`) is a “name” for the chunk, and the following are named arguments for the options. Chunks should be named like a variable or function, based on what code is being executed and/or rendered by the chunk. It’s always a good idea to name individual code chunks as a form of documentation.
There are [many options](https://yihui.name/knitr/options/) for creating code chunks (see also the [reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)). However, some of the most useful ones control how the code and its output appear in the document. These include:
* **`echo`** indicates whether you want the *R code itself* to be displayed in the document (e.g., if you want readers to be able to see your work and reproduce your calculations and analysis). Value is either `TRUE` (do display; the default) or `FALSE` (do not display).
* **`message`** indicates whether you want any messages generated by the code (for example, by calls to `message()`) to be displayed. Value is either `TRUE` (do display; the default) or `FALSE` (do not display).
If you only want to *show* your `R` code (and not *evaluate* it), you can alternatively use a standard Markdown codeblock that indicates the `r` language (````r`, *not* ````{r}`), or set the `eval` option to `FALSE`.
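For instance, a minimal sketch of the `eval` approach (the `install.packages()` call is just an illustrative placeholder) would display the code without ever running it when the document is knit:
```
```{r install_demo, eval=FALSE}
# this chunk is displayed in the document but never executed
install.packages("dplyr")
```
```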
### 12\.2\.2 Inline Code
In addition to creating distinct code blocks, you may want to execute R code *inline* with the rest of your text. This lets you **reference a variable** from your code chunk in a section of Markdown, injecting that value into the text you have written. As a result you can easily include a specific result inside a paragraph of text, and if the computation changes, re\-knitting your document will update the values inside the text without any further work needed.
As with code blocks, you’ll follow the Markdown convention of using single backticks (**```**), but put the letter **`r`** immediately after the first backtick. For example:
```
To calculate 3 + 4 inside some text, we can use `r 3 + 4` right in the _middle_.
```
When you knit the text above, the ``r 3 + 4`` would be replaced with the number `7`.
Note you can also reference values computed in the code blocks preceding your inline code; it is **best practice** to do your calculations in a code block (with `echo=FALSE`), save the result in a variable, and then simply inline that variable with e.g., ``r my.variable``.
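As a small sketch of that best practice (using the built-in `cars` data set purely for illustration), you might compute a value in a hidden chunk and then reference it inline:
```
```{r avg_speed, echo=FALSE}
# do the calculation in a hidden chunk and save the result in a variable
avg.speed <- mean(cars$speed)
```

The average speed in the `cars` data set is `r round(avg.speed, 2)` mph.
```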
12\.3 Rendering Data
--------------------
R Markdown’s code chunks let you perform data analysis directly in your document, but often you will want to include more complex data output. This section discusses a few tips for specifying dynamic, complex output to render using R Markdown.
### 12\.3\.1 Rendering Strings
If you experiment with knitting R Markdown, you will quickly notice that using `print()` will generate a code block with content that looks like a printed vector:
```
```{r echo=FALSE}
print("Hello world")
```
```
```
## [1] "Hello world"
```
For this reason, you usually want to have the code block generate a string that you save in a variable, which you can then display with an inline expression (e.g., on its own line):
```
```{r echo=FALSE}
msg <- "Hello world"
```
Below is the message to see:
`r msg`
```
Note that any Markdown syntax included in the variable (e.g., if you had `msg <- "**Hello** world"`) will be rendered as well—the ``r msg`` is replaced by the value of the expression just as if you had typed that Markdown in directly. This allows you to even include dynamic styling if you construct a “Markdown string” out of your data.
Alternatively, you can set the [`results`](https://yihui.name/knitr/options/#text-results) option to `'asis'`, which will cause the “output” to be rendered directly into the markdown. When combined with the [`cat()`](https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/cat) function (which con**cat**enates content without including additional information like vector position), you can make a code chunk effectively render a specific string:
```
```{r results='asis', echo=FALSE}
cat("Hello world")
```
```
### 12\.3\.2 Rendering Lists
Because strings that are rendered into the document have any Markdown they contain processed as well, it’s possible to specify complex Markdown such as *lists* by constructing these strings to contain the `-` symbols used for list items (note that each item will need to be separated by a line break or a `\n` character):
```
```{r echo=FALSE}
markdown.list <- "
- Lions
- Tigers
- Bears
"
```
`r markdown.list`
```
Would output a list that looks like:
* Lions
* Tigers
* Bears
Combined with the vectorized `paste()` function, it’s easy to convert vectors into Markdown lists that can be rendered:
```
```{r echo=FALSE}
animals <- c("Lions", "Tigers", "Bears")
# paste a `-` in front of each, then cat the items with newlines between
markdown.list <- paste(paste('-',animals), collapse='\n')
```
`r markdown.list`
```
And of course, the contents of the vector (e.g., the text `"Lions"`) could easily include additional Markdown syntax for bold, italic, or hyperlinked text.
* Creating a “helper function” to do this conversion is perfectly reasonable (see the sketch below); or see libraries such as [`pander`](http://rapporter.github.io/pander/), which define a number of such functions.
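A minimal sketch of such a helper (the function name `to_markdown_list()` is hypothetical) could look like:
```
```{r echo=FALSE}
# hypothetical helper: convert a character vector into a Markdown bullet list
to_markdown_list <- function(items) {
  paste(paste("-", items), collapse = "\n")
}
animal.list <- to_markdown_list(c("Lions", "Tigers", "Bears"))
```

`r animal.list`
```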
### 12\.3\.3 Rendering Tables
Because data frames are so central to programming with R, R Markdown includes capabilities to easily render data frames as Markdown *tables* via the [**`knitr::kable()`**](https://www.rdocumentation.org/packages/knitr/versions/1.19/topics/kable) function. This function takes as an argument the data frame you wish to render, and it will automatically convert that value into a Markdown table:
```
```{r echo=FALSE}
library(knitr) # make sure you load this library (once per doc)
# make a data frame
letters <- c("a", "b", "c")
numbers <- 1:3
df <- data.frame(letters = letters, numbers = numbers)
# render the table
kable(df)
```
```
* `kable()` supports a number of other arguments that can be used to customize how it outputs a table (see the sketch below).
* And of course, if the values in the data frame are strings that contain Markdown syntax (e.g., bold, italic, or hyperlinks), they will be rendered as such in the table!
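As a brief sketch of the first point (the column names and caption used here are arbitrary), `kable()` accepts arguments such as `col.names` and `caption`:
```
```{r echo=FALSE}
library(knitr)
df <- data.frame(letters = c("a", "b", "c"), numbers = 1:3)
# rename the displayed columns and add a caption to the rendered table
kable(df, col.names = c("Letter", "Number"), caption = "A small example table")
```
```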
So while you may need to do a little bit of work to manually generate the Markdown syntax, it is possible to produce complex documents dynamically from changing data sources.
Resources
---------
* [R Markdown Homepage](http://rmarkdown.rstudio.com/)
* [R Markdown Cheatsheet](https://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf) (really useful!)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf) (really useful!)
* [`knitr`](https://yihui.name/knitr/)
Chapter 13 The `ggplot2` Library
===============================
Being able to create **visualizations** (graphical representations) of data is a key step in being able to *communicate* information and findings to others. In this chapter you will learn to use the **`ggplot2`** library to declaratively make beautiful plots or charts of your data.
Although R does provide built\-in plotting functions, the `ggplot2` library implements the **Grammar of Graphics** (similar to how `dplyr` implements a *Grammar of Data Manipulation*; indeed, both packages were developed by the same person). This makes the library particularly effective for describing how visualizations should represent data, and has turned it into the preeminent plotting library in R.
Learning this library will allow you to easily make nearly any kind of (static) data visualization, customized to your exact specifications.
Examples in this chapter adapted from [*R for Data Science*](http://r4ds.had.co.nz/) by Garrett Grolemund and Hadley Wickham.
13\.1 A Grammar of Graphics
---------------------------
Just as the grammar of language helps us construct meaningful sentences out of words, the ***Grammar of Graphics*** helps us to construct graphical figures out of different visual elements. This grammar gives us a way to talk about parts of a plot: all the circles, lines, arrows, and words that are combined into a diagram for visualizing data. Originally developed by Leland Wilkinson, the Grammar of Graphics was [adapted by Hadley Wickham](http://vita.had.co.nz/papers/layered-grammar.pdf) to describe the *components* of a plot, including
* the **data** being plotted
* the **geometric objects** (circles, lines, etc.) that appear on the plot
* the **aesthetics** (appearance) of the geometric objects, and the *mappings* from variables in the data to those aesthetics
* a **statistical transformation** used to calculate the data values used in the plot
* a **position adjustment** for locating each geometric object on the plot
* a **scale** (e.g., range of values) for each aesthetic mapping used
* a **coordinate system** used to organize the geometric objects
* the **facets** or groups of data shown in different plots
Wickham further organizes these components into **layers**, where each layer has a single *geometric object*, *statistical transformation*, and *position adjustment*. Following this grammar, you can think of each plot as a set of layers of images, where each image’s appearance is based on some aspect of the data set.
All together, this grammar enables you to discuss what plots look like using a standard set of vocabulary. And like with `dplyr` and the *Grammar of Data Manipulation*, `ggplot2` uses this grammar directly to declare plots, allowing you to more easily create specific visual images.
13\.2 Basic Plotting with `ggplot2`
-----------------------------------
### 13\.2\.1 *ggplot2* library
The [**`ggplot2`**](http://ggplot2.tidyverse.org/) library provides a set of *declarative functions* that mirror the above grammar, enabling you to easily specify what you want a plot to look like (e.g., what data, geometric objects, aesthetics, scales, etc. you want it to have).
`ggplot2` is yet another external package (like `dplyr` and `httr` and `jsonlite`), so you will need to install and load it in order to use it:
```
install.packages("ggplot2") # once per machine
library("ggplot2")
```
This will make all of the plotting functions you’ll need available.
*ggplot2* is called *ggplot**2*** because once upon a time there was just a library *ggplot*. However, the author decided its original set of functions was inefficient, and in order not to break the existing API, released a successor package, *ggplot2*. Note, however, that the central function in this package is still called `ggplot()`, not `ggplot2()`!
### 13\.2\.2 *mpg* data
The *ggplot2* library comes with a number of built\-in data sets. One of the most popular of these is `mpg`, a data frame about fuel economy for different cars. It is a sufficiently small but versatile dataset for demonstrating various aspects of plotting. *mpg* has 234 rows and 11 columns. Below is a sample of it:
```
mpg[sample(nrow(mpg), 3),]
```
```
## # A tibble: 3 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 subaru fores… 2.5 2008 4 manu… 4 19 25 p suv
## 2 hyundai sonata 2.4 2008 4 manu… f 21 31 r mids…
## 3 chevrolet malibu 3.1 1999 6 auto… f 18 26 r mids…
```
The most important variables for our purposes are the following:
* **class**, car class, such as SUV, compact, minivan
* **displ**, engine size (liters)
* **cyl**, number of cylinders
* **hwy**, mileage on highway, miles per gallon
* **manufacturer**, producer of the car, e.g. Volkswagen, Toyota
### 13\.2\.3 Our first ggplot
In order to create a plot, you call the `ggplot()` function, specifying the **data** that you wish to plot. You then add new *layers* that are **geometric objects** which will show up on the plot:
```
# plot the `mpg` data set, with engine displacement (power) on the x axis
# and highway mileage on the y axis:
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
```
To walk through the above code:
* The `ggplot()` function is passed the data frame to plot as the `data` argument.
* You specify a geometric object (`geom`) by calling one of the many `geom` [functions](http://ggplot2.tidyverse.org/reference/index.html#section-layer-geoms), which are all named `geom_` followed by the name of the kind of geometry you wish to create. For example, `geom_point()` will create a layer with “point” (dot) elements as the geometry. There are a large number of these functions; see below for more details.
* For each `geom` you must specify the **aesthetic mappings**, which is how data from the data frame will be mapped to the visual aspects of the geometry. These mappings are defined using the `aes()` function. The `aes()` function takes a set of arguments (like a list), where the argument name is the visual property to map *to*, and the argument value is the data property to map *from*.
* Finally, you add `geom` layers to the plot by using the addition (**`+`**) operator.
Thus, basic plots can be created simply by specifying a data set, a `geom`, and a set of aesthetic mappings.
* Note that the `ggplot2` library does include a `qplot()` function for creating “quick plots”, which acts as a convenient shortcut for making simple, “default”\-like plots (see the sketch below). While this is a nice starting place, the strength of `ggplot2` is in its *customizability*, so read on!
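As a brief sketch (not how the rest of this chapter will build plots), the same scatterplot could be produced with `qplot()` like so:
```
# "quick plot" shorthand for the displacement vs. highway mileage scatterplot
qplot(x = displ, y = hwy, data = mpg)
```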
### 13\.2\.4 Aesthetic Mappings
The **aesthetic mapping** is a central concept of every data visualization. This means setting up the correspondence between **aesthetics**, the visual properties (visual channels) of the plot, such as *position*, *color*, *size*, or *shape*, and certain properties of the data, typically numeric values of certain variables.
Aesthetics are the representations that you want to *drive with your data properties*, rather than fix in code for all markers. Each visual channel can therefore encode an aspect of the data and be used to express underlying patterns.
The aesthetics mapping is specified in the [`aes()`](http://ggplot2.tidyverse.org/reference/index.html#section-aesthetics) function call in the `geom` layer. Above we used the mapping `aes(x=displ, y=hwy)`. This means to map the variable `displ` in the `mpg` data (engine size) to the horizontal position (*x*\-coordinate) on the plot, and the variable `hwy` (highway mileage) to the vertical position (*y* coordinate). We did not specify any other visual properties, such as color, point size or point shape, so by default the `geom_point` layer produced a set of equal size black dots, positioned according to the data. Let’s now color the points according to the class of the car. This amounts to taking an additional aesthetic, *color*, and mapping it to the variable `class` in the data as `color=class`. As we want this to happen in the same layer, we must add this to the `aes()` function as an additional named argument:
```
# color the data by car type
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
(`ggplot2` will even create a legend for you!)
Note that using the `aes()` function will cause the visual channel to be based on the data specified in the argument. For example, using `aes(color = "blue")` won’t cause the geometry’s color to be “blue”, but will instead cause the visual channel to be mapped from the *vector* `c("blue")`—as if you only had a single type of engine that happened to be called “blue”:
```
ggplot(data = mpg) + # note where parentheses are closed
geom_point(mapping = aes(x = displ, y = hwy, color = "blue"))
```
This looks confusing (note the weird legend!) and is most likely not what you want.
If you wish to specify a given aesthetic, you should ***set*** that property as an argument to the `geom` method, outside of the `aes()` call:
```
ggplot(data = mpg) + # note where parentheses are closed
geom_point(mapping = aes(x = displ, y = hwy), color = "blue") # blue points!
```
13\.3 Complex Plots
-------------------
Building on these basics, `ggplot2` can be used to build almost any kind of plot you may want. These plots are declared using functions that follow from the *Grammar of Graphics*.
### 13\.3\.1 Specifying Geometry
The most obvious distinction between plots is what **geometric objects** (`geoms`) they include. `ggplot2` supports a number of different types of [`geoms`](http://ggplot2.tidyverse.org/reference/index.html#section-layer-geoms), including:
* **`geom_point`** for drawing individual points (e.g., a scatter plot)
* **`geom_line`** for drawing lines (e.g., for a line charts)
* **`geom_smooth`** for drawing smoothed lines (e.g., for simple trends or approximations)
* **`geom_bar`** for drawing bars (e.g., for bar charts)
* **`geom_polygon`** for drawing arbitrary shapes (e.g., for drawing an area in a coordinate plane)
* **`geom_map`** for drawing polygons in the shape of a map! (You can access the *data* to use for these maps by using the [`map_data()`](http://ggplot2.tidyverse.org/reference/map_data.html) function).
Each of these geometries will need to include a set of **aesthetic mappings** (using the `aes()` function and assigned to the `mapping` argument), though the specific *visual properties* that the data will map to will vary. For example, you can map data to the `shape` of a `geom_point` (e.g., if they should be circles or squares), or you can map data to the `linetype` of a `geom_line` (e.g., if it is solid or dotted), but not vice versa.
* Almost all `geoms` **require** an `x` and `y` mapping at the bare minimum.
```
# line chart of mileage by engine power
ggplot(data = mpg) +
geom_line(mapping = aes(x = displ, y = hwy))
# bar chart of car type
ggplot(data = mpg) +
geom_bar(mapping = aes(x = class)) # no y mapping needed!
```
What makes this really powerful is that you can add **multiple geometries** to a plot, thus allowing you to create complex graphics showing multiple aspects of your data:
```
# plot with both points and smoothed line
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
geom_smooth(mapping = aes(x = displ, y = hwy))
```
Of course the aesthetics for each `geom` can be different, so you could show multiple lines on the same plot (or with different colors, styles, etc). It’s also possible to give each `geom` a different `data` argument, so that you can show multiple data sets in the same plot.
* If you want multiple `geoms` to utilize the same data or aesthetics, you can pass those values as arguments to the `ggplot()` function itself; any `geoms` added to that plot will use the values declared for the whole plot *unless overridden by individual specifications* (see the sketch below).
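For example, a minimal sketch of the point-plus-smoothed-line plot above, with the shared data and aesthetics declared once in `ggplot()`, might look like:
```
# declare the data and aesthetic mappings once for the whole plot;
# both layers inherit them unless they override them
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point() +
  geom_smooth()
```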
#### 13\.3\.1\.1 Statistical Transformations
If you look at the above `bar` chart, you’ll notice that the `y` axis was defined for you as the `count` of elements that have the particular type. This `count` isn’t part of the data set (it’s not a column in `mpg`), but is instead a **statistical transformation** that the `geom_bar` automatically applies to the data. In particular, it applies the `stat_count` transformation, which simply counts the number of rows for each `class` in the dataset.
`ggplot2` supports many different statistical transformations. For example, the “identity” transformation will leave the data “as is”. You can specify which statistical transformation a `geom` uses by passing it as the **`stat`** argument:
```
# bar chart of make and model vs. mileage
# quickly (lazily) filter the dataset to a sample of the cars: one of each make/model
new_cars <- mpg %>%
mutate(car = paste(manufacturer, model)) %>% # combine make + model
distinct(car, .keep_all = TRUE) %>% # select one of each car -- lazy filtering!
slice(1:20) # only keep 20 cars
# create the plot (you need the `y` mapping since it is not implied by the stat transform of geom_bar)
ggplot(new_cars) +
geom_bar(mapping = aes(x = car, y = hwy), stat = "identity") +
coord_flip() # horizontal bar chart
```
Additionally, `ggplot2` contains **`stat_`** functions (e.g., `stat_identity` for the “identity” transformation) that can be used to specify a layer in the same way a `geom` does:
```
# generate a "binned" (grouped) display of highway mileage
ggplot(data = mpg) +
stat_bin(aes(x = hwy, color = hwy), binwidth = 4) # binned into groups of 4 units
```
Notice the above chart is actually a [histogram](https://en.wikipedia.org/wiki/Histogram)! Indeed, almost every `stat` transformation corresponds to a particular `geom` (and vice versa) by default. Thus they can often be used interchangeably, depending on how you want to emphasize your layer creation when writing the code.
```
# these two charts are identical
ggplot(data = mpg) +
geom_bar(mapping = aes(x = class))
ggplot(data = mpg) +
stat_count(mapping = aes(x = class))
```
#### 13\.3\.1\.2 Position Adjustments
In addition to a default statistical transformation, each `geom` also has a default **position adjustment** which specifies a set of “rules” as to how different components should be positioned relative to each other. This position is noticeable in a `geom_bar` if you map a different variable to the color visual channel:
```
# bar chart of mileage, colored by engine type
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = class)) # fill color, not outline color
```
The `geom_bar` by default uses a position adjustment of `"stack"`, which makes each “bar” a height appropriate to its value and *stacks* them on top of each other. You can use the **`position`** argument to specify what position adjustment rules to follow:
```
# a filled bar chart (fill the vertical height)
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = drv), position = "fill")
# a dodged (group) bar chart -- values next to each other
# (not great dodging demos in this data set)
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = drv), position = "dodge")
```
Check the documentation for each particular `geom` to learn more about its possible position adjustments.
### 13\.3\.2 Styling with Scales
Whenever you specify an **aesthetic mapping**, `ggplot` uses a particular **scale** to determine the *range of values* that the data should map to. Thus, when you specify
```
# color the data by engine type
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
`ggplot` automatically adds a **scale** for each mapping to the plot:
```
# same as above, with explicit scales
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class)) +
scale_x_continuous() +
scale_y_continuous() +
scale_colour_discrete()
```
Each scale can be represented by a function with the following name: `scale_`, followed by the name of the aesthetic property, followed by an `_` and the name of the scale. A `continuous` scale will handle things like numeric data (where there is a *continuous set* of numbers), whereas a `discrete` scale will handle things like colors (since there is a small list of *distinct* colors).
While the default scales will work fine, it is possible to explicitly add different scales to replace the defaults. For example, you can use a scale to change the direction of an axis:
```
# mileage relationship, ordered in reverse
ggplot(data = mpg) +
geom_point(mapping = aes(x = cty, y = hwy)) +
scale_x_reverse()
```
Similarly, you can use `scale_x_log10()` to plot on a [logarithmic scale](https://en.wikipedia.org/wiki/Logarithmic_scale).
You can also use scales to specify the *range* of values on an axis by passing in a `limits` argument. This is useful for making sure that multiple graphs share scales or formats.
```
# subset data by class
suv <- mpg %>% filter(class == "suv") # suvs
compact <- mpg %>% filter(class == "compact") # compact cars
# scales
x_scale <- scale_x_continuous(limits = range(mpg$displ))
y_scale <- scale_y_continuous(limits = range(mpg$hwy))
col_scale <- scale_colour_discrete(limits = unique(mpg$drv))
ggplot(data = suv) +
geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
x_scale + y_scale + col_scale
ggplot(data = compact) +
geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
x_scale + y_scale + col_scale
```
Notice how it is easy to compare the two data sets to each other because the axes and colors match!
These scales can also be used to specify the “tick” marks and labels; see the resources at the end of the chapter for details, and the brief sketch below. And for further ways of specifying where the data appears on the graph, see the [Coordinate Systems](ggplot2.html#coordinate-systems) section below.
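As a quick sketch (the break positions and label strings here are arbitrary), tick marks and their labels can be set with the `breaks` and `labels` arguments of a scale:
```
# explicitly place and label the x-axis ticks
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  scale_x_continuous(breaks = c(2, 4, 6), labels = c("2L", "4L", "6L"))
```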
#### 13\.3\.2\.1 Color Scales
A more common scale to change is which set of colors to use in a plot. While you can use scale functions to specify a list of colors to use, a more common option is to use a pre\-defined palette from [**colorbrewer.org**](http://colorbrewer2.org/). These color sets have been carefully designed to look good and to be viewable to people with certain forms of color blindness. This color scale is specified with the `scale_color_brewer()` function, passing the `palette` as an argument.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class), size = 4) +
scale_color_brewer(palette = "Set3")
```
You can get the palette name from the *colorbrewer* website by looking at the `scheme` query parameter in the URL. Or see the diagram [here](https://bl.ocks.org/mbostock/5577023) and hover the mouse over each palette for its name.
You can also specify *continuous* color values by using a [gradient](http://ggplot2.tidyverse.org/reference/scale_gradient.html) scale, or [manually](http://ggplot2.tidyverse.org/reference/scale_manual.html) specify the colors you want to use as a *named vector* (see the sketch below).
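For instance, a minimal sketch of manually assigning colors (the particular colors chosen here are arbitrary) uses `scale_color_manual()` with a named vector of values:
```
# map each drive type ("4", "f", "r") to a hand-picked color
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
  scale_color_manual(values = c("4" = "darkgreen", "f" = "steelblue", "r" = "firebrick"))
```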
### 13\.3\.3 Coordinate Systems
The next term from the *Grammar of Graphics* that can be specified is the **coordinate system**. As with **scales**, coordinate systems are specified with functions (that all start with **`coord_`**) and are added to a `ggplot`. There are a number of different possible [coordinate systems](http://ggplot2.tidyverse.org/reference/index.html#section-coordinate-systems) to use, including:
* **`coord_cartesian`** the default [cartesian coordinate](https://en.wikipedia.org/wiki/Cartesian_coordinate_system) system, where you specify `x` and `y` values.
* **`coord_flip`** a cartesian system with the `x` and `y` flipped
* **`coord_fixed`** a cartesian system with a “fixed” aspect ratio (e.g., 1\.78 for a “widescreen” plot)
* **`coord_polar`** a plot using [polar coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_system)
* **`coord_quickmap`** a coordinate system that approximates a good aspect ratio for maps. See the documentation for more details.
Most of these systems support the `xlim` and `ylim` arguments, which specify the *limits* for the coordinate system.
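As a short sketch (the limit values are arbitrary), you can restrict the default cartesian system to a region of interest:
```
# restrict the visible coordinate region to a zoomed-in window
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  coord_cartesian(xlim = c(2, 5), ylim = c(20, 40))
```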
### 13\.3\.4 Facets
**Facets** are ways of *grouping* a data plot into multiple different pieces (*subplots*). This allows you to view a separate plot for each value in a [categorical variable](https://en.wikipedia.org/wiki/Categorical_variable). Conceptually, breaking a plot up into facets is similar to using the `group_by()` verb in `dplyr`, with each facet acting like a *level* in an R *factor*.
You can construct a plot with multiple facets by using the **`facet_wrap()`** function. This will produce a “row” of subplots, one for each value of the categorical variable (the number of rows can be specified with an additional argument):
```
# a plot with facets based on vehicle type.
# similar to what we did with `suv` and `compact`!
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_wrap(~class)
```
Note that the argument to `facet_wrap()` function is written with a tilde (**`~`**) in front of it. This specifies that the column name should be treated as a **formula**. A formula is a bit like an “equation” in mathematics; it’s like a string representing what set of operations you want to perform (putting the column name in a string also works in this simple case). Formulas are in fact the same structure used with *standard evaluation* in `dplyr`; putting a `~` in front of an expression (such as `~ desc(colname)`) allows SE to work.
* In short: put a `~` in front of the column name you want to “group” by.
### 13\.3\.5 Labels \& Annotations
Textual labels and annotations (on the plot, axes, geometry, and legend) are an important part of making a plot understandable and communicating information. Although not an explicit part of the *Grammar of Graphics* (they would be considered a form of geometry), `ggplot` makes it easy to add such annotations.
You can add titles and axis labels to a chart using the **`labs()`** function (*not* `labels`, which is a different R function!):
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class)) +
labs(
title = "Fuel Efficiency by Engine Power, 1999-2008", # plot title
x = "Engine power (litres displacement)", # x-axis label (with units!)
y = "Fuel Efficiency (miles per gallon)", # y-axis label (with units!)
color = "Car Type"
) # legend label for the "color" property
```
It is possible to add labels into the plot itself (e.g., to label each point or line) by adding a new `geom_text` or `geom_label` to the plot; effectively, you’re plotting an extra set of data which happen to be the variable names:
```
# a data table of each car that has best efficiency of its type
best_in_class <- mpg %>%
group_by(class) %>%
filter(row_number(desc(hwy)) == 1)
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) + # same mapping for all geoms
geom_point(mapping = aes(color = class)) +
geom_label(data = best_in_class, mapping = aes(label = model), alpha = 0.5)
```
*R for Data Science* (linked in the resources below) recommends using the [`ggrepel`](https://github.com/slowkow/ggrepel) package to help position labels.
13\.4 Plotting in Scripts
-------------------------
From the first encounter with *ggplot* one typically gets the impression that it is the `ggplot()` function that creates the plot. This **is not true**! A call to `ggplot()` just creates a *ggplot* object, a data structure that contains the data and all other details needed to create the plot. The image itself is not created at this point; it is created by the *print* method. If you type an expression on the R console, R evaluates and prints that expression, which is why we can use the console as a manual calculator for simple math such as `2 + 2`. The same is true for *ggplot*: it returns a *ggplot* object, and provided you don’t store it in a variable, it is printed, and the print method of the *ggplot* object actually makes the image. This is why we see the images immediately when we work with *ggplot* on the console.
Things may be different, however, if we do this in a script. When you execute a script, returned expressions that are not stored are not printed. For instance, the script
```
diamonds %>%
sample_n(1000) %>%
ggplot() +
geom_point(aes(carat, price)) # note: not printed
```
will not produce any output when sourced as a single script (and not run line\-by\-line). You have to print the returned objects explicitly, for instance as
```
data <- diamonds %>%
sample_n(1000)
p <- ggplot(data) +
geom_point(aes(carat, price)) # store here
print(p) # print here
```
In scripts we often want the code not to produce an image on screen, but to store it in a file instead. This can be achieved in a variety of ways, for instance by redirecting graphical output to a pdf device with the command `pdf()`:
```
data <- diamonds %>%
sample_n(1000)
p <- ggplot(data) +
geom_point(aes(carat, price)) # store here
pdf(file="diamonds.pdf", width=10, height=8)
# redirect to a pdf file
print(p) # print here
dev.off() # remember to close the file
```
After redirecting the output, all plots will be written to the pdf file (as separate pages if you create more than one plot). Note that you have to close the file with `dev.off()`, otherwise it will be broken. There are other output options besides pdf; you may want to check the `jpeg` and `png` image outputs. Finally, *ggplot* also has a dedicated way to save individual plots to a file using `ggsave()` (see the sketch below).
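A minimal sketch of the `ggsave()` approach (reusing the plot object `p` from above; the file name and dimensions are arbitrary) might be:
```
# save a single plot object directly to an image file
ggsave("diamonds.png", plot = p, width = 10, height = 8)
```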
13\.5 Other Visualization Libraries
-----------------------------------
`ggplot2` is easily the most popular library for producing data visualizations in R. That said, `ggplot2` is used to produce **static** visualizations: unchanging “pictures” of plots. Static plots are great for **explanatory visualizations**: visualizations that are used to communicate some information—or more commonly, an *argument* about that information. All of the above visualizations have been ways to explain and demonstrate an argument about the data (e.g., the relationship between car engines and fuel efficiency).
Data visualizations can also be highly effective for **exploratory analysis**, in which the visualization is used as a way to *ask and answer questions* about the data (rather than to convey an answer or argument). While it is perfectly feasible to do such exploration on a static visualization, many explorations can be better served with **interactive visualizations** in which the user can select and change the *view* and presentation of that data in order to understand it.
While `ggplot2` does not directly support interactive visualizations, there are a number of additional R libraries that provide this functionality, including:
* [**`ggvis`**](http://ggvis.rstudio.com/) is a library that uses the *Grammar of Graphics* (similar to `ggplot`), but for interactive visualizations. The interactivity is provided through the [`shiny`](http://www.rstudio.com/shiny/) library, which is introduced in a later chapter.
* [**Bokeh**](http://hafen.github.io/rbokeh/index.html) is an open\-source library for developing interactive visualizations. It automatically provides a number of “standard” interactions (pop\-up labels, drag to pan, select to zoom, etc). It is similar to `ggplot2`, in that you create a figure and then add *layers* representing different geometries (points, lines, etc). It has detailed and readable documentation, and is also available to other programming languages (such as Python).
* [**Plotly**](https://plot.ly/r/) is another library similar to *Bokeh*, in that it automatically provides standard interactions. It is also possible to take a `ggplot2` plot and [wrap](https://plot.ly/ggplot2/) it in Plotly in order to make it interactive. Plotly has many examples to learn from, though its documentation is less effective than that of other libraries.
* [**`rCharts`**](http://rdatascience.io/rCharts/) provides a way to utilize a number of *JavaScript* interactive visualization libraries. JavaScript is the programming language used to create interactive websites (HTML files), and so is highly specialized for creating interactive experiences.
There are many other libraries as well; searching around for a specific feature you need may lead you to a useful tool!
Resources
---------
* [ggplot2 Documentation](http://ggplot2.tidyverse.org/) (particularly the [function reference](http://ggplot2.tidyverse.org/reference/index.html))
* [ggplot2 Cheat Sheet](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf) (see also [here](http://zevross.com/blog/2014/08/04/beautiful-plotting-in-r-a-ggplot2-cheatsheet-3/))
* [Data Visualization (R4DS)](http://r4ds.had.co.nz/data-visualisation.html) \- tutorial using `ggplot2`
* [Graphics for Communication (R4DS)](http://r4ds.had.co.nz/graphics-for-communication.html) \- “part 2” of tutorial using `ggplot`
* [Graphics with ggplot2](http://www.statmethods.net/advgraphs/ggplot2.html) \- explanation of `qplot()`
* [Telling stories with the grammar of graphics](https://codewords.recurse.com/issues/six/telling-stories-with-data-using-the-grammar-of-graphics)
* [A Layered Grammar of Graphics (Wickham)](http://vita.had.co.nz/papers/layered-grammar.pdf)
13\.1 A Grammar of Graphics
---------------------------
Just as the grammar of language helps us construct meaningful sentences out of words, the ***Grammar of Graphics*** helps us to construct graphical figures out of different visual elements. This grammar gives us a way to talk about parts of a plot: all the circles, lines, arrows, and words that are combined into a diagram for visualizing data. Originally developed by Leland Wilkinson, the Grammar of Graphics was [adapted by Hadley Wickham](http://vita.had.co.nz/papers/layered-grammar.pdf) to describe the *components* of a plot, including
* the **data** being plotted
* the **geometric objects** (circles, lines, etc.) that appear on the plot
* the **aesthetics** (appearance) of the geometric objects, and the *mappings* from variables in the data to those aesthetics
* a **statistical transformation** used to calculate the data values used in the plot
* a **position adjustment** for locating each geometric object on the plot
* a **scale** (e.g., range of values) for each aesthetic mapping used
* a **coordinate system** used to organize the geometric objects
* the **facets** or groups of data shown in different plots
Wickham further organizes these components into **layers**, where each layer has a single *geometric object*, *statistical transformation*, and *position adjustment*. Following this grammar, you can think of each plot as a set of layers of images, where each image’s appearance is based on some aspect of the data set.
All together, this grammar enables you to discuss what plots look like using a standard set of vocabulary. And like with `dplyr` and the *Grammar of Data Manipulation*, `ggplot2` uses this grammar directly to declare plots, allowing you to more easily create specific visual images.
13\.2 Basic Plotting with `ggplot2`
-----------------------------------
### 13\.2\.1 *ggplot2* library
The [**`ggplot2`**](http://ggplot2.tidyverse.org/) library provides a set of *declarative functions* that mirror the above grammar, enabling you to easily specify what you want a plot to look like (e.g., what data, geometric objects, aesthetics, scales, etc. you want it to have).
`ggplot2` is yet another external package (like `dplyr` and `httr` and `jsonlite`), so you will need to install and load it in order to use it:
```
install.packages("ggplot2") # once per machine
library("ggplot2")
```
This will make all of the plotting functions you’ll need available.
*ggplot2* is called *ggplot**2*** because there was once a library called simply *ggplot*. Its developer decided the original design needed an overhaul and, in order not to break the existing API, released the redesign as a successor package, *ggplot2*. Note that the central function in this package is still called `ggplot()`, not `ggplot2()`!
### 13\.2\.2 *mpg* data
The *ggplot2* library comes with a number of built\-in data sets. One of the most popular of these is `mpg`, a data frame about fuel economy for different cars. It is a small but versatile dataset for demonstrating various aspects of plotting. *mpg* has 234 rows and 11 columns. Below is a sample of it:
```
mpg[sample(nrow(mpg), 3),]
```
```
## # A tibble: 3 x 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 subaru fores… 2.5 2008 4 manu… 4 19 25 p suv
## 2 hyundai sonata 2.4 2008 4 manu… f 21 31 r mids…
## 3 chevrolet malibu 3.1 1999 6 auto… f 18 26 r mids…
```
The most important variables for our purposes are the following (a quick way to inspect them is sketched after the list):
* **class**, car class, such as SUV, compact, minivan
* **displ**, engine size (liters)
* **cyl**, number of cylinders
* **hwy**, mileage on highway, miles per gallon
* **manufacturer**, producer of the car, e.g. Volkswagen, Toyota
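Before plotting, it can help to take a quick look at these columns. The following is a minimal sketch using only base R functions:

```
# how many cars of each class are in the data set?
table(mpg$class)

# numeric summaries of the two variables we will plot most often
summary(mpg[, c("displ", "hwy")])
```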
### 13\.2\.3 Our first ggplot
In order to create a plot, you call the `ggplot()` function, specifying the **data** that you wish to plot. You then add new *layers* that are **geometric objects** which will show up on the plot:
```
# plot the `mpg` data set, with highway mileage on the x axis and
# engine displacement (power) on the y axis:
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy))
```
To walk through the above code:
* The `ggplot()` function is passed the data frame to plot as the `data` argument.
* You specify a geometric object (`geom`) by calling one of the many `geom` [functions](http://ggplot2.tidyverse.org/reference/index.html#section-layer-geoms), which are all named `geom_` followed by the name of the kind of geometry you wish to create. For example, `geom_point()` will create a layer with “point” (dot) elements as the geometry. There are a large number of these functions; see below for more details.
* For each `geom` you must specify the **aesthetic mappings**, which is how data from the data frame will be mapped to the visual aspects of the geometry. These mappings are defined using the `aes()` function. The `aes()` function takes a set of arguments (like a list), where the argument name is the visual property to map *to*, and the argument value is the data property to map *from*.
* Finally, you add `geom` layers to the plot by using the addition (**`+`**) operator.
Thus, basic simple plots can be created simply by specifying a data set, a `geom`, and a set of aesthetic mappings.
* Note that the `ggplot2` library does include a `qplot()` function for creating “quick plots”, which acts as a convenient shortcut for making simple, “default”\-like plots. While this is a nice starting place, the strength of `ggplot2` is in its *customizability*, so read on!
### 13\.2\.4 Aesthetic Mappings
The **aesthetic mapping** is a central concept of every data visualization. It is the correspondence between **aesthetics**, the visual properties (visual channels) of the plot, such as *position*, *color*, *size*, or *shape*, and properties of the data, typically the values of particular variables.
Aesthetics are the representations that you want to *drive with your data properties*, rather than fix in code for all markers. Each visual channel can therefore encode an aspect of the data and be used to express underlying patterns.
The aesthetic mapping is specified in the [`aes()`](http://ggplot2.tidyverse.org/reference/index.html#section-aesthetics) function call in the `geom` layer. Above we used the mapping `aes(x=displ, y=hwy)`. This means to map the variable `displ` in the `mpg` data (engine size) to the horizontal position (*x*\-coordinate) on the plot, and the variable `hwy` (highway mileage) to the vertical position (*y*\-coordinate). We did not specify any other visual properties, such as color, point size, or point shape, so by default the `geom_point` layer produced a set of equal\-size black dots, positioned according to the data. Let’s now color the points according to the class of the car. This amounts to taking an additional aesthetic, *color*, and mapping it to the variable `class` in the data as `color=class`. As we want this to happen in the same layer, we add this to the `aes()` function as an additional named argument:
```
# color the data by car type
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
(`ggplot2` will even create a legend for you!)
Note that using the `aes()` function will cause the visual channel to be based on the data specified in the argument. For example, using `aes(color = "blue")` won’t cause the geometry’s color to be “blue”, but will instead cause the visual channel to be mapped from the *vector* `c("blue")`, as if every row in the data had a categorical value that happened to be called “blue”:
```
ggplot(data = mpg) + # note where parentheses are closed
geom_point(mapping = aes(x = displ, y = hwy, color = "blue"))
```
This looks confusing (note the weird legend!) and is most likely not what you want.
If you wish to set an aesthetic to a specific, fixed value (rather than map it from the data), you should ***set*** that property as an argument to the `geom` function, outside of the `aes()` call:
```
ggplot(data = mpg) + # note where parentheses are closed
geom_point(mapping = aes(x = displ, y = hwy), color = "blue") # blue points!
```
13\.3 Complex Plots
-------------------
Building on these basics, `ggplot2` can be used to build almost any kind of plot you may want. These plots are declared using functions that follow from the *Grammar of Graphics*.
### 13\.3\.1 Specifying Geometry
The most obvious distinction between plots is what **geometric objects** (`geoms`) they include. `ggplot2` supports a number of different types of [`geoms`](http://ggplot2.tidyverse.org/reference/index.html#section-layer-geoms), including:
* **`geom_point`** for drawing individual points (e.g., a scatter plot)
* **`geom_line`** for drawing lines (e.g., for a line chart)
* **`geom_smooth`** for drawing smoothed lines (e.g., for simple trends or approximations)
* **`geom_bar`** for drawing bars (e.g., for bar charts)
* **`geom_polygon`** for drawing arbitrary shapes (e.g., for drawing an area in a coordinate plane)
* **`geom_map`** for drawing polygons in the shape of a map! (You can access the *data* to use for these maps by using the [`map_data()`](http://ggplot2.tidyverse.org/reference/map_data.html) function).
Each of these geometries will need to include a set of **aesthetic mappings** (using the `aes()` function and assigned to the `mapping` argument), though the specific *visual properties* that the data will map to will vary. For example, you can map data to the `shape` of a `geom_point` (e.g., if they should be circles or squares), or you can map data to the `linetype` of a `geom_line` (e.g., if it is solid or dotted), but not vice versa.
* Almost all `geoms` **require** an `x` and `y` mapping at the bare minimum.
```
# line chart of mileage by engine power
ggplot(data = mpg) +
geom_line(mapping = aes(x = displ, y = hwy))
# bar chart of car type
ggplot(data = mpg) +
geom_bar(mapping = aes(x = class)) # no y mapping needed!
```
What makes this really powerful is that you can add **multiple geometries** to a plot, allowing you to create complex graphics that show multiple aspects of your data:
```
# plot with both points and smoothed line
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
geom_smooth(mapping = aes(x = displ, y = hwy))
```
Of course the aesthetics for each `geom` can be different, so you could show multiple lines on the same plot (or with different colors, styles, etc). It’s also possible to give each `geom` a different `data` argument, so that you can show multiple data sets in the same plot.
* If you want multiple `geoms` to utilize the same data or aesthetics, you can pass those values as arguments to the `ggplot()` function itself; any `geoms` added to that plot will use the values declared for the whole plot *unless overridden by individual specifications* (see the sketch below).
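For example, the plot above that combines points and a smoothed line could be rewritten so that both `geoms` inherit shared data and mappings from the `ggplot()` call; this is just a sketch of that pattern:

```
# declare the shared data and aesthetics once, in ggplot()
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point(mapping = aes(color = class)) + # adds its own color mapping
  geom_smooth()                              # inherits x and y from ggplot()
```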
#### 13\.3\.1\.1 Statistical Transformations
If you look at the above `bar` chart, you’ll notice that the `y` axis was defined for you as the `count` of elements that have the particular type. This `count` isn’t part of the data set (it’s not a column in `mpg`), but is instead a **statistical transformation** that `geom_bar` automatically applies to the data. In particular, it applies the `stat_count` transformation, which simply counts the number of rows in which each `class` appears in the dataset.
`ggplot2` supports many different statistical transformations. For example, the “identity” transformation will leave the data “as is”. You can specify which statistical transformation a `geom` uses by passing it as the **`stat`** argument:
```
# bar chart of make and model vs. mileage
# quickly (lazily) filter the dataset to a sample of the cars: one of each make/model
new_cars <- mpg %>%
mutate(car = paste(manufacturer, model)) %>% # combine make + model
distinct(car, .keep_all = TRUE) %>% # select one of each car -- lazy filtering!
slice(1:20) # only keep 20 cars
# create the plot (you need the `y` mapping since it is not implied by the stat transform of geom_bar)
ggplot(new_cars) +
geom_bar(mapping = aes(x = car, y = hwy), stat = "identity") +
coord_flip() # horizontal bar chart
```
Additionally, `ggplot2` contains **`stat_`** functions (e.g., `stat_identity` for the “identity” transformation) that can be used to specify a layer in the same way a `geom` does:
```
# generate a "binned" (grouped) display of highway mileage
ggplot(data = mpg) +
stat_bin(aes(x = hwy, color = hwy), binwidth = 4) # binned into groups of 4 units
```
Notice the above chart is actually a [histogram](https://en.wikipedia.org/wiki/Histogram)! Indeed, almost every `stat` transformation corresponds to a particular `geom` (and vice versa) by default. Thus they can often be used interchangeably, depending on how you want to emphasize your layer creation when writing the code.
```
# these two charts are identical
ggplot(data = mpg) +
geom_bar(mapping = aes(x = class))
ggplot(data = mpg) +
stat_count(mapping = aes(x = class))
```
#### 13\.3\.1\.2 Position Adjustments
In addition to a default statistical transformation, each `geom` also has a default **position adjustment** which specifies a set of “rules” as to how different components should be positioned relative to each other. This position is noticeable in a `geom_bar` if you map a different variable to the color visual channel:
```
# bar chart of mileage, colored by car class
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = class)) # fill color, not outline color
```
The `geom_bar` by default uses a position adjustment of `"stack"`, which makes each “bar” a height appropriate to its value and *stacks* them on top of each other. You can use the **`position`** argument to specify what position adjustment rules to follow:
```
# a filled bar chart (fill the vertical height)
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = drv), position = "fill")
# a dodged (group) bar chart -- values next to each other
# (not great dodging demos in this data set)
ggplot(data = mpg) +
geom_bar(mapping = aes(x = hwy, fill = drv), position = "dodge")
```
Check the documentation for each particular `geom` to learn more about its possible position adjustments.
### 13\.3\.2 Styling with Scales
Whenever you specify an **aesthetic mapping**, `ggplot` uses a particular **scale** to determine the *range of values* that the data should map to. Thus, when you specify
```
# color the data by car class
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class))
```
`ggplot` automatically adds a **scale** for each mapping to the plot:
```
# same as above, with explicit scales
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class)) +
scale_x_continuous() +
scale_y_continuous() +
scale_colour_discrete()
```
Each scale can be represented by a function with the following name: `scale_`, followed by the name of the aesthetic property, followed by an `_` and the name of the scale. A `continuous` scale will handle things like numeric data (where there is a *continuous set* of numbers), whereas a `discrete` scale will handle things like colors (since there is a small list of *distinct* colors).
While the default scales will work fine, it is possible to explicitly add different scales to replace the defaults. For example, you can use a scale to change the direction of an axis:
```
# mileage relationship, ordered in reverse
ggplot(data = mpg) +
geom_point(mapping = aes(x = cty, y = hwy)) +
scale_x_reverse()
```
Similarly, you can use `scale_x_log10()` to plot on a [logarithmic scale](https://en.wikipedia.org/wiki/Logarithmic_scale).
You can also use scales to specify the *range* of values on an axis by passing in a `limits` argument. This is useful for making sure that multiple graphs share scales or formats.
```
# subset data by class
suv <- mpg %>% filter(class == "suv") # suvs
compact <- mpg %>% filter(class == "compact") # compact cars
# scales
x_scale <- scale_x_continuous(limits = range(mpg$displ))
y_scale <- scale_y_continuous(limits = range(mpg$hwy))
col_scale <- scale_colour_discrete(limits = unique(mpg$drv))
ggplot(data = suv) +
geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
x_scale + y_scale + col_scale
ggplot(data = compact) +
geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
x_scale + y_scale + col_scale
```
Notice how it is easy to compare the two data sets to each other because the axes and colors match!
These scales can also be used to specify the “tick” marks and labels; see the resources at the end of the chapter for details. And for further ways of specifying where the data appears on the graph, see the [Coordinate Systems](ggplot2.html#coordinate-systems) section below.
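As a brief sketch of that idea, the `breaks` and `labels` arguments of a scale control where tick marks appear and what text labels them:

```
# put x-axis ticks at whole-liter engine sizes, with custom labels
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  scale_x_continuous(
    breaks = c(2, 3, 4, 5, 6, 7),
    labels = c("2L", "3L", "4L", "5L", "6L", "7L")
  )
```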
#### 13\.3\.2\.1 Color Scales
A more common scale to change is which set of colors to use in a plot. While you can use scale functions to specify a list of colors to use, a more common option is to use a pre\-defined palette from [**colorbrewer.org**](http://colorbrewer2.org/). These color sets have been carefully designed to look good and to be viewable by people with certain forms of color blindness. This color scale is specified with the `scale_color_brewer()` function, passing the `palette` as an argument.
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class), size = 4) +
scale_color_brewer(palette = "Set3")
```
You can get the palette name from the *colorbrewer* website by looking at the `scheme` query parameter in the URL. Or see the diagram [here](https://bl.ocks.org/mbostock/5577023) and hover the mouse over each palette for its name.
You can also specify *continuous* color values by using a [gradient](http://ggplot2.tidyverse.org/reference/scale_gradient.html) scale, or [manually](http://ggplot2.tidyverse.org/reference/scale_manual.html) specify the colors you want to use as a *named vector*.
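For instance, here is a small sketch of both options: a continuous gradient driven by a numeric variable, and manually chosen colors for each drive type (the specific colors are arbitrary):

```
# continuous color: a gradient driven by city mileage
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = cty)) +
  scale_color_gradient(low = "darkblue", high = "lightblue")

# manual colors: a named vector maps each `drv` value to a color
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy, color = drv)) +
  scale_color_manual(values = c("4" = "darkgreen", "f" = "orange", "r" = "purple"))
```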
### 13\.3\.3 Coordinate Systems
The next term from the *Grammar of Graphics* that can be specified is the **coordinate system**. As with **scales**, coordinate systems are specified with functions (that all start with **`coord_`**) and are added to a `ggplot`. There are a number of different possible [coordinate systems](http://ggplot2.tidyverse.org/reference/index.html#section-coordinate-systems) to use, including:
* **`coord_cartesian`** the default [cartesian coordinate](https://en.wikipedia.org/wiki/Cartesian_coordinate_system) system, where you specify `x` and `y` values.
* **`coord_flip`** a cartesian system with the `x` and `y` flipped
* **`coord_fixed`** a cartesian system with a “fixed” aspect ratio (e.g., 1\.78 for a “widescreen” plot)
* **`coord_polar`** a plot using [polar coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_system)
* **`coord_quickmap`** a coordinate system that approximates a good aspect ratio for maps. See the documentation for more details.
Most of these systems support the `xlim` and `ylim` arguments, which specify the *limits* for the coordinate system.
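As a small sketch, the first plot below zooms in on larger engines with `coord_cartesian()`, and the second redraws the earlier bar chart of car classes on polar coordinates:

```
# zoom in on engines of 4 liters or more (without dropping data)
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  coord_cartesian(xlim = c(4, 7))

# the car class bar chart, drawn on a polar coordinate system
ggplot(data = mpg) +
  geom_bar(mapping = aes(x = class, fill = class)) +
  coord_polar()
```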
### 13\.3\.4 Facets
**Facets** are ways of *grouping* a data plot into multiple different pieces (*subplots*). This allows you to view a separate plot for each value in a [categorical variable](https://en.wikipedia.org/wiki/Categorical_variable). Conceptually, breaking a plot up into facets is similar to using the `group_by()` verb in `dplyr`, with each facet acting like a *level* in an R *factor*.
You can construct a plot with multiple facets by using the **`facet_wrap()`** function. This will produce a “row” of subplots, one for each value of the categorical variable (the number of rows can be specified with an additional argument):
```
# a plot with facets based on vehicle type.
# similar to what we did with `suv` and `compact`!
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy)) +
facet_wrap(~class)
```
Note that the argument to `facet_wrap()` function is written with a tilde (**`~`**) in front of it. This specifies that the column name should be treated as a **formula**. A formula is a bit like an “equation” in mathematics; it’s like a string representing what set of operations you want to perform (putting the column name in a string also works in this simple case). Formulas are in fact the same structure used with *standard evaluation* in `dplyr`; putting a `~` in front of an expression (such as `~ desc(colname)`) allows SE to work.
* In short: put a `~` in front of the column name you want to “group” by.
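For example, the number of rows of subplots mentioned above is set with the `nrow` argument; this is a minimal sketch of the same faceted plot laid out on two rows:

```
# the same faceted plot, arranged on two rows of subplots
ggplot(data = mpg) +
  geom_point(mapping = aes(x = displ, y = hwy)) +
  facet_wrap(~class, nrow = 2)
```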
### 13\.3\.5 Labels \& Annotations
Textual labels and annotations (on the plot, axes, geometry, and legend) are an important part of making a plot understandable and communicating information. Although not an explicit part of the *Grammar of Graphics* (they would be considered a form of geometry), `ggplot` makes it easy to add such annotations.
You can add titles and axis labels to a chart using the **`labs()`** function (*not* `labels`, which is a different R function!):
```
ggplot(data = mpg) +
geom_point(mapping = aes(x = displ, y = hwy, color = class)) +
labs(
title = "Fuel Efficiency by Engine Power, 1999-2008", # plot title
x = "Engine power (litres displacement)", # x-axis label (with units!)
y = "Fuel Efficiency (miles per gallon)", # y-axis label (with units!)
color = "Car Type"
) # legend label for the "color" property
```
It is possible to add labels into the plot itself (e.g., to label each point or line) by adding a new `geom_text` or `geom_label` to the plot; effectively, you’re plotting an extra set of data which happen to be the variable names:
```
# a data table of each car that has best efficiency of its type
best_in_class <- mpg %>%
group_by(class) %>%
filter(row_number(desc(hwy)) == 1)
ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) + # same mapping for all geoms
geom_point(mapping = aes(color = class)) +
geom_label(data = best_in_class, mapping = aes(label = model), alpha = 0.5)
```
*R for Data Science* (linked in the resources below) recommends using the [`ggrepel`](https://github.com/slowkow/ggrepel) package to help position labels.
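Assuming the `ggrepel` package is installed, a sketch of the same labeled plot using its `geom_label_repel()` (which nudges labels away from the points and from each other) might look like this:

```
# install.packages("ggrepel") # once per machine, if needed
library("ggrepel")

ggplot(data = mpg, mapping = aes(x = displ, y = hwy)) +
  geom_point(mapping = aes(color = class)) +
  geom_label_repel(data = best_in_class, mapping = aes(label = model), alpha = 0.5)
```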
13\.4 Plotting in Scripts
-------------------------
From one’s first encounter with *ggplot*, one typically gets the impression that it is the `ggplot()` function that creates the plot. This **is not true!** A call to `ggplot()` just creates a *ggplot* object, a data structure that contains the data and all the other details necessary for creating the plot. The image itself is not created at this point; it is created by the *print* method. If you type an expression at the R console, R evaluates and prints that expression. This is why we can use the console as a manual calculator for simple math, such as `2 + 2`. The same is true for *ggplot*: it returns a *ggplot* object, and if you don’t store it in a variable, it is printed, and the print method of the *ggplot* object actually draws the image. This is why we see the images immediately when we work with *ggplot* at the console.
Things may be different, however, if we do this in a script. When you execute a script, returned expressions that are not stored are not printed automatically. For instance, the script
```
diamonds %>%
sample_n(1000) %>%
ggplot() +
geom_point(aes(carat, price)) # note: not printed
```
will not produce any output when sourced as a single script (and not run line\-by\-line). You have to print the returned objects explicitly, for instance as
```
data <- diamonds %>%
sample_n(1000)
p <- ggplot(data) +
geom_point(aes(carat, price)) # store here
print(p) # print here
```
In scripts we often want the code not to produce an image on screen, but to store it in a file instead. This can be achieved in a variety of ways, for instance by redirecting graphical output to a pdf device with the command `pdf()`:
```
data <- diamonds %>%
sample_n(1000)
p <- ggplot(data) +
geom_point(aes(carat, price)) # store here
pdf(file="diamonds.pdf", width=10, height=8)
# redirect to a pdf file
print(p) # print here
dev.off() # remember to close the file
```
After redirecting the output, all plots will be written to the pdf file (as separate pages if you create more than one plot). Note that you have to close the file with `dev.off()`, otherwise it will be broken. There are other output options besides pdf; you may want to check the `jpeg` and `png` image outputs. Finally, *ggplot* also has a dedicated way to save individual plots to a file: `ggsave()`.
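For completeness, here is a sketch of the `ggsave()` approach. By default it saves the most recently displayed plot, but it is safer to pass the plot object explicitly; the file name and dimensions here are just examples:

```
data <- diamonds %>%
  sample_n(1000)
p <- ggplot(data) +
  geom_point(aes(carat, price))

# the file type is inferred from the extension (.pdf, .png, .jpeg, ...)
ggsave("diamonds.png", plot = p, width = 10, height = 8)
```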
13\.5 Other Visualization Libraries
-----------------------------------
`ggplot2` is easily the most popular library for producing data visualizations in R. That said, `ggplot2` is used to produce **static** visualizations: unchanging “pictures” of plots. Static plots are great for **explanatory visualizations**: visualizations that are used to communicate some information—or more commonly, an *argument* about that information. All of the above visualizations have been ways to explain and demonstrate an argument about the data (e.g., the relationship between car engines and fuel efficiency).
Data visualizations can also be highly effective for **exploratory analysis**, in which the visualization is used as a way to *ask and answer questions* about the data (rather than to convey an answer or argument). While it is perfectly feasible to do such exploration on a static visualization, many explorations can be better served with **interactive visualizations** in which the user can select and change the *view* and presentation of that data in order to understand it.
While `ggplot2` does not directly support interactive visualizations, there are a number of additional R libraries that provide this functionality, including:
* [**`ggvis`**](http://ggvis.rstudio.com/) is a library that uses the *Grammar of Graphics* (similar to `ggplot`), but for interactive visualizations. The interactivity is provided through the [`shiny`](http://www.rstudio.com/shiny/) library, which is introduced in a later chapter.
* [**Bokeh**](http://hafen.github.io/rbokeh/index.html) is an open\-source library for developing interactive visualizations. It automatically provides a number of “standard” interactions (pop\-up labels, drag to pan, select to zoom, etc). It is similar to `ggplot2` in that you create a figure and then add *layers* representing different geometries (points, lines, etc). It has detailed and readable documentation, and is also available for other programming languages (such as Python).
* [**Plotly**](https://plot.ly/r/) is another library similar to *Bokeh*, in that it automatically provides standard interactions. It is also possible to take a `ggplot2` plot and [wrap](https://plot.ly/ggplot2/) it in Plotly in order to make it interactive. Plotly has many examples to learn from, though its documentation is less effective than that of some other libraries.
* [**`rCharts`**](http://rdatascience.io/rCharts/) provides a way to utilize a number of *JavaScript* interactive visualization libraries. JavaScript is the programming language used to create interactive websites (HTML files), and so is highly specialized for creating interactive experiences.
There are many other libraries as well; searching around for a specific feature you need may lead you to a useful tool!
Resources
---------
* [ggplot2 Documentation](http://ggplot2.tidyverse.org/) (particularly the [function reference](http://ggplot2.tidyverse.org/reference/index.html))
* [ggplot2 Cheat Sheet](https://www.rstudio.com/wp-content/uploads/2016/11/ggplot2-cheatsheet-2.1.pdf) (see also [here](http://zevross.com/blog/2014/08/04/beautiful-plotting-in-r-a-ggplot2-cheatsheet-3/))
* [Data Visualization (R4DS)](http://r4ds.had.co.nz/data-visualisation.html) \- tutorial using `ggplot2`
* [Graphics for Communication (R4DS)](http://r4ds.had.co.nz/graphics-for-communication.html) \- “part 2” of tutorial using `ggplot`
* [Graphics with ggplot2](http://www.statmethods.net/advgraphs/ggplot2.html) \- explanation of `qplot()`
* [Telling stories with the grammar of graphics](https://codewords.recurse.com/issues/six/telling-stories-with-data-using-the-grammar-of-graphics)
* [A Layered Grammar of Graphics (Wickham)](http://vita.had.co.nz/papers/layered-grammar.pdf)
| Field Specific |
info201.github.io | https://info201.github.io/git-branches.html |
Chapter 14 Git Branches
=======================
While `git` is great for uploading and downloading code, its true benefits are its ability to support *reversability* (e.g., undo) and *collaboration* (working with other people). In order to effectively utilize these capabilities, you need to understand git’s **branching model**, which is central to how the program manages different versions of code.
This chapter will cover how to work with **branches** with git and GitHub, including using them to work on different features simultaneously and to undo previous changes.
14\.1 Git Branches
------------------
So far, you’ve been using git to create a *linear sequence* of commits: they are all in a line, one after another.
A linear sequence of commits, each with an SHA1 identifier.
Each commit has a message associated with it (that you can see with `git log --oneline`), as well as a unique [SHA\-1](https://en.wikipedia.org/wiki/SHA-1) hash (the random numbers and letters), which can be used to identify that commit as an “id number”.
But you can also save commits in a *non\-linear* sequence. Perhaps you want to try something new and crazy without breaking code that you’ve already written. Or you want to work on two different features simultaneously (having separate commits for each). Or you want multiple people to work on the same code without stepping on each other’s toes.
To do this, you use a feature of git called **branching** (because you can have commits that “branch off” from a line of development):
An example of branching commits.
In this example, you have a primary branch (called the `master` branch), and decide you want to try an experiment. You *split off* a new branch (called, for example, `experiment`), which saves some funky changes to your code. But then you decide to make further changes to your main development line, adding more commits to `master` that ignore the changes stored in the `experiment` branch. You can develop `master` and `experiment` simultaneously, making changes to each version of the code. You can even branch off further versions (e.g., a `bugfix` to fix a problem) if you wish. And once you decide you’re happy with the code added to both versions, you can **merge** them back together, so that the `master` branch now contains all the changes that were made on the `experiment` branch. If you decided that the `experiment` didn’t work out, you can simply delete that set of changes without ever having messed with your “core” `master` branch.
You can view a list of current branches in the repo with the command
```
git branch
```
(The item with the asterisk (`*`) is the “current branch” you’re on. The latest commit of the branch you’re on is referred to as the **`HEAD`**.)
You can use the same command to create a *new* branch:
```
git branch [branch_name]
```
This will create a new branch called `branch_name` (replacing `[branch_name]`, including the brackets, with whatever name you want). Note that if you run `git branch` again you’ll see that this *hasn’t actually changed what branch you’re on*. In fact, all you’ve done is created a new *reference* (like a new variable!) that refers to the current commit as the given branch name.
* You can think of this like creating a new variable called `branch_name` and assigning the latest commit to that! Almost like you wrote `new_branch <- my_last_commit`.
* If you’re familiar with [LinkedLists](https://en.wikipedia.org/wiki/Linked_list), it’s a similar idea to changing a pointer in those.
In order to switch to a different branch, use the command (without the brackets)
```
git checkout [branch_name]
```
**Checking out** a branch doesn’t actually create a new commit! All it does is change the `HEAD` (the “commit I’m currently looking at”) so that it now refers to the latest commit of the target branch. You can confirm that the branch has changed with `git branch`.
* You can think of this like assigning a new value (the latest commit of the target branch) to the `HEAD` variable. Almost like you wrote `HEAD <- branch_name_last_commit`.
* Note that you can create *and* checkout a branch in a single step using the `-b` option of `git checkout`:
```
git checkout -b [branch_name]
```
Once you’ve checked out a particular branch, any *new* commits from that point on will be “attached” to the “HEAD” of that branch, while the “HEAD” of other branches (e.g., `master`) will stay the same. If you use `git checkout` again, you can switch back to the other branch.
* **Important** checking out a branch will “reset” your code to whatever it looked like when you made that commit. Switch back and forth between branches and watch your code change!
Switching between the HEAD of different branches.
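Concretely, a round trip between two branches might look like the following sketch (the file and branch names are just examples):

```
git checkout -b experiment        # create and switch to a new branch
# ...edit analysis.R...
git add analysis.R
git commit -m "Try a crazy new idea"
git checkout master               # analysis.R reverts to the master version
git checkout experiment           # ...and changes back again
```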
Note that you can only check out code if the *current working directory* has no uncommitted changes. This means you’ll need to `commit` any changes to the current branch before you `checkout` another. If you want to “save” your changes but don’t want to commit to them, you can also use git’s ability to temporarily [stash](https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning) changes.
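A minimal stash workflow, assuming you have uncommitted edits that you are not ready to commit, looks something like this:

```
git stash              # set aside uncommitted changes; the working directory is now clean
git checkout master    # switch branches and do whatever you need to do...
git checkout experiment
git stash pop          # re-apply (and drop) the stashed changes
```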
Finally, you can delete a branch using `git branch -d [branch_name]`. Note that this will give you a warning if you might lose work; be sure and read the output message!
14\.2 Merging
-------------
If you have changes (commits) spread across multiple branches, eventually you’ll want to combine those changes back into a single branch. This is a process called **merging**: you “merge” the changes from one branch *into* another. You do this with the (surprise!) `merge` command:
```
git merge [other_branch]
```
This command will merge `other_branch` **into the current branch**. So if you want to end up with the “combined” version of your commits on a particular branch, you’ll need to switch to (`checkout`) that branch before you run the merge.
* **IMPORTANT** If something goes wrong, don’t panic and try to close your command\-line! Come back to this book and look up how to fix the problem you’ve encountered (e.g., how to exit *vim*). And if you’re unsure why something isn’t working with git, use **`git status`** to check the current status and for what steps to do next.
* Note that the `rebase` command will perform a similar operation, but without creating a new “merge” commit—it simply takes the commits from one branch and attaches them to the end of the other. This effectively **changes history**, since it is no longer clear where the branching occurred. From an archival and academic view, you never want to “destroy history” and lose a record of changes that were made. History is important: don’t screw with it! Thus we recommend you *avoid* rebasing and stick with merging.
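Continuing the earlier example, a minimal merge of the `experiment` branch into `master` might look like this (branch names are examples):
```
git checkout master    # switch to the branch you want the combined result on
git merge experiment   # merge experiment *into* master
```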
### 14\.2\.1 Merge Conflicts
Merging is a regular occurrence when working with branches. But consider the following situation:
1. You’re on the `master` branch.
2. You create and `checkout` a new branch called `danger`
3. On the `danger` branch, you change line 12 of the code to be “I like kitties”. You then commit this change (with message “Change line 12 of danger”).
4. You `checkout` (switch to) the `master` branch again.
5. On the `master` branch, you change line 12 of the code to be “I like puppies”. You then commit this change (with message “Change line 12 of master”).
6. You use `git merge danger` to merge the `danger` branch **into** the `master` branch.
In this situation, you are trying to *merge two different changes to the same line of code*, and thus should be shown an error on the command\-line:
A merge conflict reported on the command\-line
This is called a **merge conflict**. A merge conflict occurs when two commits from different branches include different changes to the same code (they conflict). Git is just a simple computer program, and has no way of knowing which version to keep (“Are kitties better than puppies? How should I know?!”).
Since git can’t determine which version of the code to keep, it ***stops the merge in the middle*** and forces you to choose what code is correct **manually**.
In order to **resolve the merge conflict**, you will need to edit the file (code) so that you pick which version to keep. Git adds “code” to the file to indicate where you need to make a decision about which code is better:
Code including a merge conflict.
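For the kitties/puppies scenario above, the conflicted portion of the file would look roughly like this (the label after `>>>>>>>` is the name of the branch being merged in):
```
<<<<<<< HEAD
I like puppies
=======
I like kitties
>>>>>>> danger
```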
In order to resolve the conflict:
1. Use `git status` to see which files have merge conflicts. Note that files may have more than one conflict!
2. Choose which version of the code to keep (or keep a combination, or replace it with something new entirely!) You do this by **editing the file** (i.e., open it in Atom or RStudio and change it). Pretend that your cat walked across your keyboard and added a bunch of extra junk; it is now your task to fix your work and restore it to a clean, working state. ***Be sure and test your changes to make sure things work!***
3. Be sure and remove the `<<<<<<<` and `=======` and `>>>>>>>`. These are not legal code in any language.
4. Once you’re satisfied that the conflicts are all resolved and everything works as it should, follow the instructions in the error message and `add` and `commit` your changes (the code you “modified” to resolve the conflict):
```
git add .
git commit -m "Resolve merge conflict"
```
This will complete the merge! Use `git status` to check that everything is clean again.
**Merge conflicts are expected**. You didn’t do something wrong if one occurs! Don’t worry about getting merge conflicts or try to avoid them: just resolve the conflict, fix the “bug” that has appeared, and move on with your life.
14\.3 Undoing Changes
---------------------
One of the key benefits of version control systems is **reversibility**: the ability to “undo” a mistake (and we all make lots of mistakes when programming!) Git provides two basic ways that you can go back and fix a mistake you’ve made previously:
1. You can replace a file (or the entire project directory!) with a version saved as a previous commit.
2. You can have git “reverse” the changes that you made with a previous commit, effectively applying the *opposite* changes and thereby undoing it.
Note that both of these require you to have committed a working version of the code you want to go back to. Git only knows about changes that have been committed—if you don’t commit, git can’t help you! **Commit early, commit often**.
For both forms of undoing, first recall how each commit has a unique SHA\-1 hash (those random numbers) that acted as its “name”. You can see these with the `git log --oneline` command.
You can use the `checkout` command to switch not only to the commit named by a branch (e.g., `master` or `experiment`), but to *any* commit in order to “undo” work. You refer to the commit by its hash number in order to check it out:
```
git checkout [commit_number] [filename]
```
This will replace the current version ***of a single file*** with the version saved in `commit_number`. You can also use **`--`** as the commit\-number to refer to the `HEAD` (the most recent commit in the branch):
```
git checkout -- [filename]
```
If you’re trying to undo changes to lots of files, you can alternatively replace the entire project directory with a version from a previous commit by checking out that commit **as a new branch**:
```
git checkout -b [branch_name] [commit_number]
```
This command treats the old commit as if it were the `HEAD` of a new named branch (the `branch_name` you supply), with that commit as its starting point. You can then make further changes and merge it back into your development or `master` branch.
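As a rough sketch (the commit hash `a1b2c3d` and the branch name here are hypothetical), that process might look like:
```
# a1b2c3d stands in for a real hash from `git log --oneline`
git checkout -b restore-old-version a1b2c3d  # new branch pointing at the old commit
# ...confirm this is the version you want, make any further fixes...
git checkout master
git merge restore-old-version                # bring the restored code back into master
```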
**IMPORTANT NOTE**: If you don’t create a *new branch* (with **`-b`**) when checking out an old commit, you’ll enter **detached HEAD state**. You can’t commit from here, because there is no branch for that commit to be attached to! See [this tutorial (scroll down)](https://www.atlassian.com/git/tutorials/using-branches/git-checkout) for details and diagrams. If you find yourself in a detached HEAD state, you can use `git checkout master` to get back to the last saved commit (though you will lose any changes you made in that detached state—so just avoid it in the first place!)
But what if you just had one bad commit, and don’t want to throw out other good changes you made later? For this, you can use the `git revert` command:
```
git revert [commit_number] --no-edit
```
This will determine what changes that commit made to the files, and then apply the *opposite* changes to effectively “back out” the commit. Note that this **does not** go back *to* the given commit number (that’s what `checkout` is for!), but rather will *reverse the commit you specify*.
* This command does create a new commit (the `--no-edit` option tells git that you don’t want to include a custom commit message). This is great from an archival point of view: you never “destroy history” and lose the record of what changes were made and then reverted. History is important: don’t screw with it!
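For example (the hash `a1b2c3d` here is hypothetical), a revert might look like:
```
git log --oneline             # find the hash of the bad commit (say, a1b2c3d)
git revert a1b2c3d --no-edit  # create a new commit that reverses a1b2c3d's changes
```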
Conversely, the `reset` command will destroy history. **Do not use it**, no matter what StackOverflow tells you to do.
14\.4 GitHub and Branches
-------------------------
GitHub is an online service that stores copies of repositories in the cloud. When you `push` and `pull` to GitHub, what you’re actually doing is **merging** your commits with the ones on GitHub!
However, remember that you don’t edit any files on GitHub’s servers, only on your own local machine. And since **resolving a merge conflict** involves editing the files, you have to be careful that conflicts only occur on the local machine, not on GitHub. This plays out in two ways:
1. You will **not** be able to **`push`** to GitHub if merging your commits ***into*** GitHub’s repo would cause a merge conflict. Git will instead report an error, telling you that you need to `pull` changes first and make sure that your version is “up to date”. Up to date in this case means that you have downloaded and merged all of the remote’s commits onto your local machine, so there is no chance of divergent changes causing a merge conflict when you merge by pushing.
2. Whenever you **`pull`** changes from GitHub, there may be a merge conflict! These are resolved ***in the exact same way*** as when merging local branches: that is, you need to *edit the files* to resolve the conflict, then `add` and `commit` the updated versions.
Thus in practice, when working with GitHub (and especially with multiple people), in order to upload your changes you’ll need to do the following:
1. `pull` (download) any changes you don’t have
2. *Resolve* any merge conflicts that occurred
3. `push` (upload) your merged set of changes
Additionally, because GitHub repositories are repos just like the ones on your local machine, they can have branches as well! You have access to any *remote* branches when you `clone` a repo; you can see a list of them with `git branch -a` (using the “**a**ll” option).
If you create a new branch on your local machine, it is possible to push *that branch* to GitHub, creating a mirroring branch on the remote repo. You do this by specifying the branch in the `git push` command:
```
git push origin branch_name
```
where `branch_name` is the name of the branch you are currently on (and thus want to push to GitHub).
Note that you often want to associate your local branch with the remote one (make the local branch **track** the remote), so that when you use `git status` you will be able to see whether they are different or not. You can establish this relationship by including the `-u` option in your push:
```
git push -u origin branch_name
```
Tracking will be remembered once set up, so you only need to use the `-u` option *once*.
### 14\.4\.1 GitHub Pages
GitHub’s use of branches provides a number of additional features, one of which is the ability to **host** web pages (`.html` files, which can be generated from R Markdown) on a publicly accessible web server that can “serve” the page to anyone who requests it. This feature is known as [GitHub Pages](https://help.github.com/articles/what-is-github-pages/).
With GitHub pages, GitHub will automatically serve your files to visitors as long as the files are in a branch with a magic name: **`gh-pages`**. Thus in order to **publish** your webpage and make it available online, all you need to do is create that branch, merge your content into it, and then push that branch to GitHub.
You almost always want to create the new `gh-pages` branch off of your `master` branch. This is because you usually want to publish the “finished” version, which is traditionally represented by the `master` branch. This means you’ll need to switch over to `master`, and then create a new branch from there:
```
git checkout master
git checkout -b gh-pages
```
Checking out the new branch will create it *with all of the commits of its source*, meaning `gh-pages` will start with the exact same content as `master`—if your page is done, then it is ready to go!
You can then upload this new local branch to the `gh-pages` branch on the `origin` remote:
```
git push -u origin gh-pages
```
After the push completes, you will be able to see your web page using the following URL:
```
https://GITHUB-USERNAME.github.io/REPO-NAME
```
(Replace `GITHUB-USERNAME` with the user name **of the account hosting the repo**, and `REPO-NAME` with your repository name).
* This means that if you’re making your homework reports available, the `GITHUB-USERNAME` will be the name of the course organization.
Some important notes:
1. The `gh-pages` branch must be named *exactly* that. If you misspell the name, or use an underscore instead of a dash, it won’t work.
2. Only the files and commits in the `gh-pages` branch are visible on the web. All commits in other branches (`experiment`, `master`, etc.) are not visible on the web (other than as source code in the repo). This allows you to work on your site with others before publishing those changes to the web.
3. Any content in the `gh-pages` branch will be publicly accessible, even if your repo is private. You can remove specific files from the `gh-pages` branch that you don’t want to be visible on the web, while still keeping them in the `master` branch: use `git rm` to remove the file and then add, commit, and push the deletion.
* Be careful not to push [any passwords or anything](http://www.itnews.com.au/news/aws-urges-developers-to-scrub-github-of-secret-keys-375785) to GitHub!
4. The web page will only be initially built when a **repo administrator** pushes a change to the `gh-pages` branch; if someone just has “write access” to the repo (e.g., they are a contributor, but not an “owner”), then the page won’t be created. But once an administrator (such as the person who created the repo) pushes that branch and causes the initial page to be created, then any further updates will appear as well.
After you’ve created your initial `gh-pages` branch, any changes you want to appear online will need to be saved as new commits to that branch and then pushed back up to GitHub. **HOWEVER**, it is best practice to ***not*** make any changes directly to the `gh-pages` branch! Instead, you should switch back to the `master` branch, make your changes there, commit them, then `merge` them back into `gh-pages` before pushing to GitHub:
```
# switch back to master
git checkout master
### UPDATE YOUR CODE (outside of the terminal)
# commit the changes
git add .
git commit -m "YOUR CHANGE MESSAGE"
# switch back to gh-pages and merge changes from master
git checkout gh-pages
git merge master
# upload to github
git push --all
```
(the `--all` option on `git push` will push all branches that are **tracking** remote branches).
This procedure will keep your code synchronized between the branches, while avoiding a large number of merge conflicts.
Resources
---------
* [Git and GitHub in Plain English](https://red-badger.com/blog/2016/11/29/gitgithub-in-plain-english)
* [Atlassian Git Branches Tutorial](https://www.atlassian.com/git/tutorials/using-branches)
* [Git Branching (Official Documentation)](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell)
* [Learn Git Branching](http://learngitbranching.js.org/) (interactive tutorial)
* [Visualizing Git Concepts](http://www.wei-wang.com/ExplainGitWithD3/#) (interactive visualization)
* [Resolving a merge conflict (GitHub)](https://help.github.com/articles/resolving-a-merge-conflict-using-the-command-line/)
Chapter 15 Git Collaboration
============================
Being able to merge between branches allows you to work **collaboratively**, with multiple people making changes to the same repo and sharing those changes through GitHub. There are a variety of approaches (or **workflows**) that can be used to facilitate collaboration and make sure that people are effectively able to share code. This section describes a variety of different workflows; however, we suggest the branch\-based workflow called the [**Feature Branch Workflow**](https://www.atlassian.com/git/tutorials/comparing-workflows#feature-branch-workflow) for this course.
15\.1 Centralized Workflow
--------------------------
In order to understand the Feature Branch Workflow, it’s important to first understand how to collaborate on a centralized repository. The Feature Branch Workflow uses a **centralized repository** stored on GitHub—that is, every single member of the team will `push` and `pull` to a single GitHub repo. However, since each repository needs to be created under a particular account, this means that a ***single member*** of the team will need to create the repo (such as by accepting a GitHub Classroom assignment, or by clicking the *“New”* button on their “Repositories” tab on the GitHub web portal).
In order to make sure everyone is able to `push` to the repository, whoever creates the repo will need to [**add the other team members as collaborators**](https://help.github.com/articles/inviting-collaborators-to-a-personal-repository/). You can do this under the **Settings** tab:
Adding a collaborator to a Github repo (via the web portal).
Once you’ve added everyone to the GitHub repository, **each team member** will need to **`clone`** the repository to their local machines to work on the code individually. Collaborators can then `push` any changes they make to the central repository, and `pull` any changes made by others. Because multiple members will be contributing to the *same repository*, it’s important to ensure that you are working on the most up\-to\-date version of the code. This means that you will regularly have to **pull in changes** from GitHub that your team members may have committed. As a result, we suggest that you have a workflow that looks like this:
```
# Begin your work session by pulling in changes from GitHub
git pull origin master
# If necessary, resolve any merge conflicts and commit them
git add .
git commit -m "Merge in changes from GitHub"
# Do your work, then add, commit and push
git add .
git commit -m "Make progress on feature X"
git push origin master
```
Note that if someone pushes a commit to GitHub *before you push your changes*, you’ll need to integrate those changes into your code (and test them!) before pushing up to GitHub. While working on a single `master` branch in this fashion is possible, you’ll encounter fewer conflicts if you use a dedicated **feature branch** for each developer or feature you’re working on.
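As a sketch, recovering from a rejected push typically looks something like this (assuming the `master` branch and the `origin` remote):
```
git push origin master   # rejected: the remote has commits you don't have yet
git pull origin master   # download and merge those commits (resolve conflicts, test!)
git push origin master   # now the push should succeed
```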
15\.2 Feature Branch Workflow
-----------------------------
The Feature Branch Workflow is a natural extension of the Centralized Workflow that enhances the model by defining specific *branches* for different pieces of development (still with one centralized repository). The core idea behind the Feature Branch Workflow is that all development should take place on a dedicated **feature branch**, rather than on the `master` branch. This allows for different people to work on different branches without disturbing the main codebase. For example, you might have one branch `visualization` that focuses on adding a complex visualization, or another `experimental-analysis` that tries a bold new approach to processing the data. Each branch is based on a *feature* (capability or part) of the project, not a particular person: a single developer could be working on multiple feature branches.
The idea is that the `master` branch *always* contains “production\-level” code: valid, completely working code that you could deploy or publish (read: give to your boss or teacher) at a whim. All feature branches branch off of `master`, and are allowed to contain temporary or even broken code (since they are still in development). This way there is always a “working” (if incomplete) copy of the code (`master`), and development can be kept isolated and considered independent of the whole. This is similar to the example with the `experiment` branch above.
The workflow thus works like this:
1. Ada decides to add a new feature or part to the code. She creates a new feature branch off of `master`:
```
git checkout master
git checkout -b adas-feature
```
2. Ada does some work on this feature
```
# work is done outside of terminal
git add .
git commit -m "Add progress on feature"
```
3. Ada takes a break, pushing her changes to GitHub
```
git push -u origin adas-feature
```
4. After talking to Ada, Bebe decides to help finish up the feature. She checks out the branch and makes some changes, then pushes them back to GitHub
```
# fetch will "download" commits from GitHub, without merging them
git fetch origin
git checkout adas-feature
# work on adas-feature is done outside of terminal
git add .
git commit -m "Add more progress on feature"
git push origin adas-feature
```
5. Ada downloads Bebe’s changes
```
git pull origin adas-feature
```
6. Ada decides the feature is finished, and *merges* it back into `master`. But first, she makes sure she has the latest version of the `master` code to integrate her changes with
```
git checkout master # switch to master
git pull origin master # download any changes
git merge adas-feature # merge the feature into the master branch
# fix any merge conflicts!!
git push origin master # upload the updated code to master
```
7. And now that the feature has been successfully added to the project, Ada can delete the feature branch (using `git branch -d branch_name`), as sketched below. See also [here](http://stackoverflow.com/questions/2003505/how-to-delete-a-git-branch-both-locally-and-remotely).
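A minimal sketch of that cleanup (using the example branch name from above) might be:
```
git branch -d adas-feature             # delete the local branch (refuses if unmerged)
git push origin --delete adas-feature  # delete the corresponding branch on GitHub
```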
This kind of workflow is very common and effective for supporting collaboration. Note that as projects get large, you may need to start being more organized about how and when you create feature branches. For example, the [**Git Flow**](http://nvie.com/posts/a-successful-git-branching-model/) model organizes feature branches around product releases, and is often a starting point for large collaborative projects.
15\.3 Forking Workflow
----------------------
The Forking Workflow takes a **fundamentally different approach** to collaboration than the Centralized and Feature Branch workflows. Rather than having a single remote, each developer will have **their own repository** on GitHub that is *forked* from the original repository. As discussed in the [introductory GitHub Chapter](git-basics.html#git-basics), a developer can create their own remote repository from an existing project by *forking* it on GitHub. This allows the individual to make changes to (and contribute to) the project. However, we have not yet discussed how those changes can be integrated into the original code base. GitHub offers a feature called [**pull requests**](https://help.github.com/articles/creating-a-pull-request/) by which you can merge two remote branches (that is: `merge` two branches that are on GitHub). A **pull request** is a request for the changes from one branch to be pulled (merged) into another.
### 15\.3\.1 Pull Requests
Pull requests are primarily used to let teams of developers *collaborate*—one developer can send a request “hey, can you integrate my changes?” to another. The second developer can perform a **code review**: reviewing the proposed changes and making comments or asking for corrections to anything they find problematic. Once the changes are improved, the pull request can be **accepted** and the changes merged into the target branch. This process is how programmers collaborate on *open\-source software* (such as R libraries like `dplyr`): a developer can *fork* an existing professional project, make changes to that fork, and then send a pull request back to the original developer asking them to merge in changes (“will you include my changes in your branch/version of the code?”).
Pull requests should only be used when doing collaboration using remote branches! Local branches should be `merge`’d locally using the command\-line, not GitHub’s pull request feature.
In order to issue a pull request, both branches you wish to merge will need to be `pushed` to GitHub (whether they are in the same repo or in forks). To issue the pull request, navigate to your repository on GitHub’s web portal and choose the **New Pull Request** button (it is next to the drop\-down that lets you view different branches).
In the next page, you will need to specify which branches you wish to merge. The **base** branch is the one you want to merge *into* (often `master`), and the **head** branch (labeled “compare”) is the branch with the new changes you want to merge (often a feature branch; see below).
Add a title and description for your pull request. These should follow the format for git commit messages. Finally, click the **Create pull request** button to finish creating the pull request.
Important! The pull request is a request to merge two branches, not to merge a specific set of commits. This means that you can *push more commits* to the head/merge\-from branch, and they will automatically be included in the pull request—the request is always “up\-to\-date” with whatever commits are on the (remote) branch.
You can view all pull requests (including those that have been accepted) through the **Pull Requests** tab at the top of the repo’s web portal. This is where you can go to see comments that have been left by the reviewer.
If someone sends you a pull request (e.g., another developer on your team), you can [accept that pull request](https://help.github.com/articles/merging-a-pull-request/) through GitHub’s web portal. If the branches can be merged without a conflict, you can do this simply by hitting the **Merge pull request** button. However, if GitHub detects that a conflict may occur, you will need to [pull down the branches and merge them locally](https://help.github.com/articles/checking-out-pull-requests-locally).
It is best practice to *never* accept your own pull requests! If you don’t need any collaboration, just merge the branches locally.
Note that when you merge a pull request via the GitHub web site, the merge is done entirely on the server. Your local repo will not yet have those changes, and so you will need to use `git pull` to download the updates to an appropriate branch.
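For example (assuming the pull request was merged into `master` on the `origin` remote), a quick sync might look like:
```
git checkout master     # switch to the branch the pull request was merged into
git pull origin master  # download the merge commit created on GitHub
```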
Resources
---------
* [Atlassian Git Workflows Tutorial](https://www.atlassian.com/git/tutorials/comparing-workflows)
15\.1 Centralized Workflow
--------------------------
In order to understand the Feature Branch Workflow, it’s important to first understand how to collaborate on a centralized repository. The Feature Branch Workflow uses a **centralized repository** stored on GitHub—that is, every single member of the team will `push` and `pull` to a single GitHub repo. However, since each repository needs to be created under a particular account, this means that a ***single member*** of the team will need to create the repo (such as by accepting a GitHub Classroom assignment, or by clicking the *“New”* button on their “Repositories” tab on the GitHub web portal).
In order to make sure everyone is able to `push` to the repository, whoever creates the repo will need to [**add the other team members as collaborators**](https://help.github.com/articles/inviting-collaborators-to-a-personal-repository/). You can do this under the **Settings** tab:
Adding a collaborator to a Github repo (via the web portal).
Once you’ve added everyone to the GitHub repository, **each team member** will need to **`clone`** the repository to their local machines to work on the code individually. Collaborators can then `push` any changes they make to the central repository, and `pull` and changes made by others. Because multiple members will be contributing to the *same repositiory*, it’s important to ensure that you are working on the most up\-to\-date version of the code. This means that you will regularly have to **pull in changes** from GitHub that your team members may have committed. As a result, we suggest that you have a workflow that looks like this:
```
# Begin your work session by pulling in changes from GitHub
git pull origin master
# If necessary, resolve any merge conflicts and commit them
git add .
git commit -m "Merge in changes from GitHub"
# Do your work, then add, commit and push
git add .
git commit -m "Make progress on feature X"
git push origin master
```
Note, if someone pushes a commit to GitHub *before you push your changes*, you’ll need to integrate those into your code (and test them!) before pushing up to GitHub. While working on a single `master` branch in this fashion is possible, you’ll encounter fewer conflicts if you use a dedicated **feature branch** for each developer or feature you’re working on.
15\.2 Feature Branch Workflow
-----------------------------
The Feature Branch Workflow is a natural extension of the Centralized Workflow that enhances the model by defining specific *branches* for different pieces of development (still with one centralized repository). The core idea behind the Feature Branch Workflow is that all development should take place on a dedicated **feature branch**, rather than on the `master` branch. This allows for different people to work on different branches without disturbing the main codebase. For example, you might have one branch `visualization` that focuses on adding a complex visualization, or another `experimental-analysis` that tries a bold new approach to processing the data. Each branch is based on a *feature* (capability or part) of the project, not a particular person: a single developer could be working on multiple feature branches.
The idea is that the `master` branch *always* contains “production\-level” code: valid, completely working code that you could deploy or publish (read: give to your boss or teacher) at a whim. All feature branches branch off of `master`, and are allowed to contain temporary or even broken code (since they are still in development). This way there is always a “working” (if incomplete) copy of the code (`master`), and development can be kept isolated and considered independent of the whole. This is similar to the example with the `experiment` branch above.
The workflow thus works like this:
1. Ada decides to add a new feature or part to the code. She creates a new feature branch off of `master`:
```
git checkout master
git checkout -b adas-feature
```
2. Ada does some work on this feature
```
# work is done outside of terminal
git add .
git commit -m "Add progress on feature"
```
3. Ada takes a break, pushing her changes to GitHub
```
git push -u origin adas-feature
```
4. After talking to Ada, Bebe decides to help finish up the feature. She checks out the branch and makes some changes, then pushes them back to GitHub
```
# fetch will "download" commits from GitHub, without merging them
git fetch origin
git checkout adas-feature
# work is on adas-feature done outside of terminal
git add .
git commit -m "Add more progress on feature"
git push origin adas-feature
```
5. Ada downloads Bebe’s changes
```
git pull origin adas-feature
```
6. Ada decides the feature is finished, and *merges* it back into `master`. But first, she makes sure she has the latest version of the `master` code to integrate her changes with
```
git checkout master # switch to master
git pull origin master # download any changes
git merge adas-feature # merge the feature into the master branch
# fix any merge conflicts!!
git push origin master # upload the updated code to master
```
7. And now that the feature has been successfully added to the project, Ada can delete the feature branch (using `git branch -d branch_name`). See also [here](http://stackoverflow.com/questions/2003505/how-to-delete-a-git-branch-both-locally-and-remotely).
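For reference, deleting the finished feature branch both locally and on GitHub might look like this (using the `adas-feature` branch from the example above):
```
git branch -d adas-feature             # delete the local branch (safe once it has been merged)
git push origin --delete adas-feature  # delete the branch on GitHub
```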
This kind of workflow is very common and effective for supporting collaboration. Note that as projects get large, you may need to start being more organized about how and when you create feature branches. For example, the [**Git Flow**](http://nvie.com/posts/a-successful-git-branching-model/) model organizes feature branches around product releases, and is often a starting point for large collaborative projects.
15\.3 Forking Workflow
----------------------
The Forking Workflow takes a **fundamentally different approach** to collaboration than the Centralized and Feature Branch workflows. Rather than having a single remote, each developer will have **their own repository** on GitHub that is *forked* from the original repository. As discussed in the [introductory GitHub Chapter](git-basics.html#git-basics), a developer can create their own remote repository from an existing project by *forking* it on GitHub. This allows the individual to make changes to (and contribute to) the project. However, we have not yet discussed how those changes can be integrated into the original code base. GitHub offers a feature called [**pull requests**](https://help.github.com/articles/creating-a-pull-request/) by which you can merge two remote branches (that is: `merge` two branches that are on GitHub). A **pull request** is a request for the changes from one branch to be pulled (merged) into another.
### 15\.3\.1 Pull Requests
Pull requests are primarily used to let teams of developers *collaborate*—one developer can send a request “hey, can you integrate my changes?” to another. The second developer can perform a **code review**: reviewing the proposed changes and making comments or asking for corrections to anything they find problematic. Once the changes are improved, the pull request can be **accepted** and the changes merged into the target branch. This process is how programmers collaborate on *open\-source software* (such as R libraries like `dplyr`): a developer can *fork* an existing professional project, make changes to that fork, and then send a pull request back to the original developer asking them to merge in changes (“will you include my changes in your branch/version of the code?”).
Pull requests should only be used when doing collaboration using remote branches! Local branches should be `merge`’d locally using the command\-line, not GitHub’s pull request feature.
In order to issue a pull request, both branches you wish to merge will need to be `pushed` to GitHub (whether they are in the same repo or in forks). To issue the pull request, navigate to your repository on GitHub’s web portal and choose the **New Pull Request** button (it is next to the drop\-down that lets you view different branches).
On the next page, you will need to specify which branches you wish to merge. The **base** branch is the one you want to merge *into* (often `master`), and the **head** branch (labeled “compare”) is the branch with the new changes you want to merge (often a feature branch; see below).
Add a title and description for your pull request. These should follow the format for git commit messages. Finally, click the **Create pull request** button to finish creating the pull request.
Important! The pull request is a request to merge two branches, not to merge a specific set of commits. This means that you can *push more commits* to the head/merge\-from branch, and they will automatically be included in the pull request—the request is always “up\-to\-date” with whatever commits are on the (remote) branch.
You can view all pull requests (including those that have been accepted) through the **Pull Requests** tab at the top of the repo’s web portal. This is where you can go to see comments that have been left by the reviewer.
If someone sends you a pull request (e.g., another developer on your team), you can [accept that pull request](https://help.github.com/articles/merging-a-pull-request/) through GitHub’s web portal. If the branches can be merged without a conflict, you can do this simply by hitting the **Merge pull request** button. However, if GitHub detects that a conflict may occur, you will need to [pull down the branches and merge them locally](https://help.github.com/articles/checking-out-pull-requests-locally).
It is best practice to *never* accept your own pull requests! If you don’t need any collaboration, just merge the branches locally.
Note that when you merge a pull request via the GitHub web site, the merge is done entirely on the server. Your local repo will not yet have those changes, and so you will need to use `git pull` to download the updates to an appropriate branch.
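For example, if a pull request was merged into `master` on the website, you could update your local copy like this:
```
git checkout master     # switch to the branch the pull request was merged into
git pull origin master  # download the merge commit from GitHub
```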
Resources
---------
* [Atlassian Git Workflows Tutorial](https://www.atlassian.com/git/tutorials/comparing-workflows)
Chapter 16 The `shiny` Framework
================================
Adding **interactivity** to a data report is a highly effective way of communicating that information and enabling users to explore a data set. In this chapter, you will learn about the **Shiny** framework for building interactive applications in R. Shiny provides a structure for communicating between a user\-interface (i.e., a web\-browser) and an R session, allowing users to interactively change the “code” that is run and the data that are output. This not only enables developers to create **interactive graphics**, but provides a way for users to interact directly with an R session (without writing any code!).
16\.1 Creating Shiny Apps
-------------------------
Shiny is a **web application framework for R**. As opposed to a simple (static) web page like you’ve created with R Markdown, a *web application* is an interactive, dynamic web page—the user can click on buttons, check boxes, or input text in order to change the presentation of the data. Shiny is a *framework* in that it provides the “code” for producing and enabling this interaction, while you as the developer simply “fill in the blanks” by providing *variables* or *functions* that the provided code will utilize to create the interactive page.
`shiny` is another external package (like `dplyr` and `ggplot2`), so you will need to install and load it in order to use it:
```
install.packages("shiny") # once per machine
library("shiny")
```
This will make available all of the framework functions and variables you will need to work with.
### 16\.1\.1 Application Structure
Shiny applications are divided into two parts:
1. The **User Interface (UI)** defines how the application will be *displayed* in the browser. The UI can render R content such as text or graphics just like R Markdown, but it can also include **widgets**, which are interactive controls for your application (think buttons or sliders). The UI can specify a **layout** for these components (e.g., so you can put widgets above, below, or beside one another).
The UI for a Shiny application is defined as a **value**, usually one returned from calling a **layout function**. For example:
```
# The ui is the result of calling the `fluidPage()` layout function
my_ui <- fluidPage(
# A widget: a text input box (save input in the `username` key)
textInput("username", label = "What is your name?"),
# An output element: a text output (for the `message` key)
textOutput("message")
)
```
This UI defines a [fluidPage](https://shiny.rstudio.com/reference/shiny/latest/fluidPage.html) (where the content flows “fluidly” down the page) that contains two *content elements*: a text input box where the user can type their name, and some outputted text based on the `message` variable.
2. The **Server** defines the data that will be displayed through the UI. You can think of this as an interactive R script that the user will be able to “run”: the script will take in *inputs* from the user (based on their interactions) and provide *outputs* that the UI will then display. The server uses **reactive expressions**, which are like functions that will automatically be re\-run whenever the input changes. This allows the output to be dynamic and interactive.
The Server for a Shiny application is defined as a **function** (as opposed to the UI, which is a *value*). This function takes in two *lists* as arguments: an `input` and an `output`. It then uses *render functions* and *reactive expressions* that assign values to the `output` list based on the `input` list. For example:
```
# The server is a function that takes `input` and `output` args
my_server <- function(input, output) {
# assign a value to the `message` key in `output`
# argument is a reactive expression for showing text
output$message <- renderText({
# use the `username` key from input and return a new value
# for the `message` key in output
return(paste("Hello", input$username))
})
}
```
Combined, this UI and server will allow the user to type their name into an input box, and will then say “hello” to whatever name is typed in.
More details about the UI and server components can be found in the sections below.
#### 16\.1\.1\.1 Combining UI and Server
There are two ways of combining the UI and server:
The first (newer) way is to define a file called **`app.R`**. This file should call the [**`shinyApp()`**](http://shiny.rstudio.com/reference/shiny/latest/shinyApp.html) function, which takes a UI value and Server function as arguments. For example:
```
# pass in the variables defined above
shinyApp(ui = my_ui, server = my_server)
```
Executing the `shinyApp()` function will start the App (you can also click the **“Run App”** button at the top of RStudio).
* Note: if you change the UI or the Server, you do **not** need to stop and start the app; you can simply refresh the browser or viewer window and it will reload with the new UI and server.
* If you need to stop the App, you can hit the “Stop Sign” icon on the RStudio console.
Using this function allows you to define your entire application (UI and Server) in a single file (which **must** be named `app.R`). This approach is good for simple applications that you wish to be able to share with others, since the entire application code can be listed in a single file.
However, it is also possible to define the UI and server as *separate* files. This allows you to keep the presentation (UI) separated from the logic (server) of your application, making it easier to maintain and change in the future. To do this, you define two separate files: **`ui.R`** for the UI and **`server.R`** for the Server (the files **must** be named `ui.R` and `server.R`). In these files, you must call the functions `shinyUI()` and `shinyServer()` respectively to create the UI and server, and then RStudio will automatically combine these files together into an application:
```
# In ui.R file
my_ui <- fluidPage(
# define widgets
)
shinyUI(my_ui)
```
```
# In server.R file
my_server <- function(input, output) {
# define output reactive expressions
}
shinyServer(my_server)
```
You can then run the app by using the **“Run App”** button at the top of RStudio:
Use RStudio to run a shiny app defined in separate UI and Server files.
This chapter will primarily use the “single file” approach for compactness and readability, but you are encouraged to break up the UI and server into separate files for your own, larger applications.
* Note that it is also possible to simply define the `my_ui` and `my_server` variables (for example) in separate files, and then use `source()` to load them into the `app.R` file and pass them into `shinyApp()`, as sketched below.
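A minimal sketch of that `source()` approach (the file names `my_ui.R` and `my_server.R` are just illustrative):
```
# In app.R file
library("shiny")
source("my_ui.R")      # defines the `my_ui` variable
source("my_server.R")  # defines the `my_server` function
shinyApp(ui = my_ui, server = my_server)
```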
### 16\.1\.2 The UI
The UI defines how the app will be displayed in the browser. You create a UI by calling a **layout function** such as `fluidPage()`, which will return a UI definition that can be used by the `shinyUI()` or `shinyApp()` functions.
You specify the “content” that you want the layout to contain (and hence the app to show) by passing each **content element** (piece of content) as an *argument* to that function:
```
# a "pseudocode" example, calling a function with arguments
ui <- fluidPage(element1, element2, element3)
```
Content elements are defined by calling specific *functions* that create them: for example `h1()` will create an element that has a first\-level heading, `textInput()` will create an element where the user can enter text, and `textOutput()` will create an element that can have dynamic (changing) content. Usually these content elements are defined as *nested* (anonymous) variables, each on its own line:
```
# still just calling a function with arguments!
ui <- fluidPage(
h1("My App"), # first argument
textInput('username', label = "What is your name?"), # second argument
textOutput('message') # third argument
)
```
Note that layout functions *themselves return content elements*, meaning it is possible to include a layout inside another layout. This allows you to create complex layouts by combining multiple layout elements together. For example:
```
ui <- fluidPage( # UI is a fluid page
titlePanel("My Title"), # include panel with the title (also sets browser title)
sidebarLayout( # layout the page in two columns
sidebarPanel( # specify content for the "sidebar" column
p("sidebar panel content goes here")
),
mainPanel( # specify content for the "main" column
p("main panel content goes here")
)
)
)
```
See the [Shiny documentation](http://shiny.rstudio.com/reference/shiny/latest/) and [gallery](http://shiny.rstudio.com/gallery/) for details and examples of doing complex application layouts.
Fun Fact: much of Shiny’s styling and layout structure is based on the [Bootstrap](http://getbootstrap.com/) web framework.
You can include *static* (unchanging) content in a Shiny UI layout—this is similar to the kinds of content you would write in Markdown (rather than inline R) when using R Markdown. However, you usually don’t specify this content using Markdown syntax (though it is possible to [include a markdown file](http://shiny.rstudio.com/reference/shiny/latest/include.html)’s content). Instead, you include content functions that produce HTML, the language that Markdown is converted to when you look at it in the browser. These functions include:
* `p()` for creating paragraphs, the same as plain text in Markdown
* `h1()`, `h2()`, `h3()` etc for creating headings, the same as `# Heading 1`, `## Heading 2`, `### Heading 3` in Markdown
* `em()` for creating *emphasized* (italic) text, the same as `_text_` in Markdown
* `strong()` for creating **strong** (bolded) text, the same as `**text**` in Markdown
* `a(text, href='url')` for creating hyperlinks (anchors), the same as `[text](url)` in Markdown
* `img(text, src='url')` for including images, the same as `![description](url)` in Markdown
There are many other methods as well; see [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson2/) for a list. If you are [familiar with HTML](https://info343-au16.github.io/#/tutorials/html), then these methods will seem familiar; you can also write content in HTML directly using the `tag()` content function.
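For instance, a few of these content functions used together inside a layout might look like this (the text shown is purely illustrative):
```
ui <- fluidPage(
  h1("About This App"),                                 # heading
  p("This paragraph is ", strong("static"), " text."),  # paragraph with bolded text
  a("Shiny documentation", href = "http://shiny.rstudio.com/")  # hyperlink
)
```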
#### 16\.1\.2\.1 Control Widgets and Reactive Outputs
It is more common to include **control widgets** as content elements in your UI layout. Widgets are *dynamic* (changing) control elements that the user can interact with. Each stores a **value** that the user has entered, whether by typing into a box, moving a slider, or checking a button. When the user changes their input, the stored *value* automatically changes as well.
Examples of control widgets (image from shiny.rstudio.com).
Like other content elements, widgets are created by calling an appropriate function. For example:
* `textInput()` creates a box in which the user can enter text
* `sliderInput()` creates a slider
* `selectInput()` creates a dropdown menu the user can choose from
* `checkboxInput()` creates a box the user can check (using `checkboxGroupInput()` to group them)
* `radioButtons()` creates “radio” buttons (which the user can select only one of at a time)
See [the documentation](http://shiny.rstudio.com/reference/shiny/latest/), and [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson3/) for a complete list.
All widget functions take at least two arguments:
* A **name** (as a string) for the widget’s value. This will be the **“key”** that will allow the server to be able to access the value the user has input (think: the key in the `input` *list*).
* A **label** (a string or content element described above) that will be shown alongside the widget and tell the user what the value represents. Note that this can be an empty string (`""`) if you don’t want to show anything.
Other arguments may be required by a particular widget—for example, a slider’s `min` and `max` values:
```
# this function would be nested in a layout function (e.g., `fluidPage()`)
sliderInput(
"age", # key this value will be assigned to
"Age of subjects", # label
min = 18, # minimum slider value
max = 80, # maximum slider value
value = 42 # starting value
)
```
Widgets are used to provide **inputs *to*** the Server; see the below section for how to use these inputs, as well as examples from [the gallery](http://shiny.rstudio.com/gallery/).
In order to display **outputs *from*** the Server, you include a **reactive output** element in your UI layout. These are elements similar to the basic content elements, but instead of just displaying *static* (unchanging) content they can display *dynamic* (changing) content produced by the Server.
As with other content elements, reactive outputs are created by calling an appropriate function. For example:
* `textOutput()` displays output as plain text (note this output can be nested in a content element for formatting)
* `tableOutput()` displays output as a data table (similar to `kable()` in R Markdown). See also `dataTableOutput()` for an interactive version!
* `plotOutput()` displays a graphical plot, such as one created with `ggplot2`
Each of these functions takes as an argument the **name** (as a string) of the value that will be displayed. This is the **“key”** that allows it to access the value the Server is outputting. Note that the functions may take additional arguments as well (e.g., to specify the size of a plot); see [the documentation](http://shiny.rstudio.com/reference/shiny/latest/) for details.
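For instance, a plot output that reserves a specific amount of vertical space might look like this (the `"hist"` key is just an assumed name):
```
# nested inside a layout function such as `fluidPage()`
plotOutput("hist", height = "300px")  # displays whatever the server assigns to output$hist
```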
### 16\.1\.3 The Server
The Server defines how the data input by the user will be used to create the output displayed by the app—that is, how the *control widgets* and *reactive outputs* will be connected. You create a Server by *defining a new function* (not calling a provided one):
```
server <- function(input, output) {
# assign values to `output` here
}
```
Note that this is *just a normal function* that happens to take **lists** as arguments. That means you can include the same kinds of code as you normally would—though that code will only be run once (when the application is first started) unless it is defined as part of a reactive expression.
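A minimal sketch of that distinction (the data file and the `rows` input key are assumptions):
```
server <- function(input, output) {
  # this line runs ONCE, when the application first starts
  my_data <- read.csv("data/measurements.csv")
  # this reactive expression re-runs every time `input$rows` changes
  output$preview <- renderTable({
    return(head(my_data, input$rows))
  })
}
```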
The first argument is a list of any values defined by the *control widgets*: each **name** in a control widget will be a **key** in this list. For example, using the above `sliderInput()` example would cause the list to have an `age` key (referenced as `input$age`). This allows the Server to access any data that the user has input, using the key names defined in the UI. Note that the values in this list *will change as the user interacts with the UI’s control widgets*.
The purpose of the Server function is to assign new *values* to the `output` argument list (each with an appropriate *key*). These values will then be displayed by the *reactive outputs* defined in the UI. To make it so that the values can actually be displayed by the UI, the values assigned to this list need to be the results of **Render Functions**. Similar to creating widgets or reactive outputs, different functions are associated with different types of output the server should produce. For example:
* `renderText()` will produce text (character strings) that can be displayed (i.e., by `textOutput()` in the UI)
* `renderTable()` will produce a table that can be displayed (i.e., by `tableOutput()` in the UI)
* `renderPlot()` will produce a graphical plot that can be displayed (i.e., by `plotOutput()` in the UI)
Render functions take as an argument a **Reactive Expression**. This is a lot like a function: it is a **block** of code (in braces **`{}`**) that **returns** the value which should be rendered. For example:
```
output$msg <- renderText({
# code goes here, just like any other function
my_greeting <- "Hello"
# code should always draw upon a key from the `input` variable
message <- paste(my_greeting, input$username)
# return the variable that will be rendered
return(message)
})
```
The only difference between writing a *reactive expression* and a function is that you only include the *block* (the braces and the code inside of them): you don’t use the keyword `function` and don’t specify a set of arguments.
This technically defines a *closure*, which is a programming concept used to encapsulate functions and the context for those functions.
These *reactive expressions* will be “re\-run” **every time** one of the `input` values that it references changes. So if the user interacts with the `username` control widget (and thereby changes the value of the `input` list), the expression in the above `renderText()` will be executed again, returning a new value that will be assigned to `output$msg`. And since `output$msg` has now changed, any *reactive output* in the UI (e.g., a `textOutput()`) will update to show the latest value. This makes the app interactive!
#### 16\.1\.3\.1 Multiple Views
It is quite common in a Shiny app to produce *lots* of output variables, and thus to have multiple reactive expressions. For example:
```
server <- function(input, output) {
# render a histogram plot
output$hist <- renderPlot({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
return(hist(uniform_nums)) # built-in plotting for simplicity
})
# render the counts
output$counts <- renderPrint({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
counts <- factor(cut(uniform_nums, breaks=1:10)) # factor
return(summary(counts)) # simple vector of counts
})
}
```
If you look at the above example though, you’ll notice that each render function produces a set of random numbers… which means each will produce a *different* set of numbers! The histogram and the table won’t match!
This is an example of where you want to share a single piece of data (a single **model**) between multiple different renditions (multiple **views**). Effectively, you want to define a shared variable (the `uniform_nums`) that can be referenced by both render functions. But since you need that shared variable to be able to *update* whenever the `input` changes, you need to make it be a *reactive expression* itself. You can do this by using the **`reactive()`** function:
```
server <- function(input, output) {
# define a reactive variable
uniform_nums <- reactive({
return(runif(input$num, 1, 10)) # just like for a render function
})
# render a histogram plot
output$hist <- renderPlot({
return(hist(uniform_nums())) # call the reactive variable AS A FUNCTION
})
# render the counts
output$counts <- renderPrint({
counts <- factor(cut(uniform_nums(), breaks=1:10)) # call the reactive variable AS A FUNCTION
return(summary(counts))
})
}
```
The `reactive()` function lets you define a single “variable” that is a *reactive function* which can be called from within the render functions. Importantly, the value returned by this function (the `uniform_nums()`) only changes **when a referenced `input` changes**. Thus as long as `input$num` stays the same, `uniform_nums()` will return the same value.
This is very powerful for allowing multiple **views** of a single piece of data: you can have a single source of data displayed both graphically and textually, with both views linked off of the same processed data table. Additionally, it can help keep your code more organized and readable, and avoid needing to duplicate any processing.
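For completeness, a UI that could pair with this server might look like the following (the label and slider range are assumptions; the `num`, `hist`, and `counts` keys match the server code above):
```
ui <- fluidPage(
  sliderInput("num", "How many random numbers?", min = 1, max = 500, value = 100),
  plotOutput("hist"),           # displays output$hist
  verbatimTextOutput("counts")  # displays output$counts (printed output from renderPrint())
)
```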
16\.2 Publishing Shiny Apps
---------------------------
Sharing a Shiny App with the world is a bit more involved than simply pushing the code to GitHub. You can’t just use GitHub pages to host the code because, in addition to the HTML UI, you need an R interpreter session to run the Server that the UI can connect to (and GitHub does not provide R interpreters)!
While there are a few different ways of “hosting” Shiny Apps, in this course you’ll use the simplest one: hosting through [**shinyapps.io**](https://www.shinyapps.io). shinyapps.io is a platform for hosting and running Shiny Apps; while large applications cost money, anyone can deploy a simple app (like the ones you’ll create in this course) for free.
In order to host your app on shinyapps.io, you’ll need to [create a free account](https://www.shinyapps.io/admin/#/signup). Note that you can sign up with GitHub or your Google/UW account. Follow the site’s instructions to
1. Select an account name (use something professional, like you used when signing up with GitHub)
2. Install the required `rsconnect` package (may be included with RStudio)
3. Set your authorization token (“password”). Just click the green “Copy to Clipboard” button, and then paste that into the **Console** in RStudio. You should only need to do this once.
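The pasted command will look roughly like this (the values shown are placeholders; use the ones copied from your shinyapps.io account page):
```
rsconnect::setAccountInfo(
  name = "USERNAME",  # your shinyapps.io account name
  token = "TOKEN",    # authorization token
  secret = "SECRET"   # the associated secret
)
```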
Don’t worry about “Step 3 \- Deploy”; you’ll do that through RStudio directly!
After you’ve set up an account, you can *Run* your application (as above) and hit the **Publish** button in the upper\-right corner:
How to publish a running Shiny App to shinyapps.io.
This will put your app online, available at
```
https://USERNAME.shinyapps.io/APPNAME/
```
**Important** Publishing to shinyapps.io is one of the major “pain points” in working with Shiny. For the best experience, be sure to:
1. Always test and debug your app *locally* (e.g., on your own computer, by running the App through RStudio). Make sure it works on your machine before you try to put it online.
2. Use correct folder structures and *relative paths*. All of your app should be in a single folder (usually named after the project). Make sure any `.csv` or `.R` files referenced are inside the app folder, and that you use relative paths to refer to them (see the sketch after this list). Do not include any `setwd()` statements in your code; you should only set the working directory through RStudio (because shinyapps.io will have its own working directory).
3. It is possible to [see the logs for your deployed app](http://docs.rstudio.com/shinyapps.io/applications.html#logging), which may include errors explaining any problems that arise when you deploy your app.
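A quick sketch of the relative\-path advice from point 2 (the folder and file names are assumptions):
```
# app.R lives in my-app/; the data file lives in my-app/data/
my_data <- read.csv("data/measurements.csv")  # relative path; no setwd() needed
```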
For more options and details, see [the shinyapps.io documentation](http://docs.rstudio.com/shinyapps.io/index.html).
Resources
---------
* [Shiny Documentation](http://shiny.rstudio.com/articles/)
* [Shiny Basics Article](http://shiny.rstudio.com/articles/basics.html)
* [Shiny Tutorial](http://shiny.rstudio.com/tutorial/) (video; links to text at bottom)
* [Shiny Cheatsheet](https://www.rstudio.com/wp-content/uploads/2016/01/shiny-cheatsheet.pdf)
* [Shiny Example Gallery](http://shiny.rstudio.com/gallery/)
* [shinyapps.io User Guide](http://docs.rstudio.com/shinyapps.io/index.html)
* [Interactive Plots with Shiny](http://shiny.rstudio.com/articles/plot-interaction.html) (see also [here](https://blog.rstudio.org/2015/06/16/shiny-0-12-interactive-plots-with-ggplot2/))
* [Interactive Docs with Shiny](https://shiny.rstudio.com/articles/interactive-docs.html)
16\.1 Creating Shiny Apps
-------------------------
Shiny is a **web application framework for R**. As opposed to a simple (static) web page like you’ve created with R Markdown, a *web application* is an interactive, dynamic web page—the user can click on buttons, check boxes, or input text in order to change the presentation of the data. Shiny is a *framework* in that it provides the “code” for producing and enabling this interaction, while you as the developer simply “fill in the blanks” by providing *variables* or *functions* that the provided code will utilize to create the interactive page.
`shiny` is another external package (like `dplyr` and `ggplot2`), so you will need to install and load it in order to use it:
```
install.packages("shiny") # once per machine
library("shiny")
```
This will make all of the framework functions and variables you will need to work with available.
### 16\.1\.1 Application Structure
Shiny applications are divided into two parts:
1. The **User Interface (UI)** defines how the application will be *displayed* in the browser. The UI can render R content such as text or graphics just like R Markdown, but it can also include **widgets**, which are interactive controls for your application (think buttons or sliders). The UI can specify a **layout** for these components (e.g., so you can put widgets above, below, or beside one another).
The UI for a Shiny application is defined as a **value**, usually one returned from calling a **layout function**. For example:
```
# The ui is the result of calling the `fluidPage()` layout function
my_ui <- fluidPage(
# A widget: a text input box (save input in the `username` key)
textInput("username", label = "What is your name?"),
# An output element: a text output (for the `message` key)
textOutput("message")
)
```
This UI defines a [fluidPage](https://shiny.rstudio.com/reference/shiny/latest/fluidPage.html) (where the content flows “fluidly” down the page), that contains two *content elements*: a text input box where the user can type their name, and some outputted text based on the `message` variable.
2. The **Server** defines the data that will be displayed through the UI. You can think of this as an interactive R script that the user will be able to “run”: the script will take in *inputs* from the user (based on their interactions) and provide *outputs* that the UI will then display. The server uses **reactive expressions**, which are like functions that will automatically be re\-run whenever the input changes. This allows the output to be dynamic and interactive.
The Server for a Shiny application is defined as a **function** (as opposed to the UI which is a *value*). This function takes in two *lists* as argments: an `input` and `output`. It then uses *render functions* and *reactive expressions* that assign values to the `output` list based on the `input` list. For example:
```
# The server is a function that takes `input` and `output` args
my_server <- function(input, output) {
# assign a value to the `message` key in `output`
# argument is a reactive expression for showing text
output$message <- renderText({
# use the `username` key from input and and return new value
# for the `message` key in output
return(paste("Hello", input$username))
})
}
```
Combined, this UI and server will allow the user to type their name into an input box, and will then say “hello” to whatever name is typed in.
More details about the UI and server components can be found in the sections below.
#### 16\.1\.1\.1 Combining UI and Server
There are two ways of combining the UI and server:
The first (newer) way is to define a file called **`app.R`**. This file should call the [**`shinyApp()`**](http://shiny.rstudio.com/reference/shiny/latest/shinyApp.html) function, which takes a UI value and Server function as arguments. For example:
```
# pass in the variables defined above
shinyApp(ui = my_ui, server = my_server)
```
Executing the `shinyApp()` function will start the App (you can also click the **“Run App”** button at the top of RStudio).
* Note: if you change the UI or the Server, you do **not** need to stop and start the app; you can simply refresh the browser or viewer window and it will reload with the new UI and server.
* If you need to stop the App, you can hit the “Stop Sign” icon on the RStudio console.
Using this function allows you to define your entire application (UI and Server) in a single file (which **must** be named `app.R`). This approach is good for simple applications that you wish to be able to share with others, since the entire application code can be listed in a single file.
However, it is also possible to define the UI and server as *separate* files. This allows you to keep the presentation (UI) separated from the logic (server) of your application, making it easier to maintain and change in the future. To do this, you define two separate files: **`ui.R`** for the UI and **`server.R`** for the Server (the files **must** be named `ui.R` and `server.R`). In these files, you must call the functions `shinyUI()` and `shinyServer()` respectively to create the UI and server, and then RStudio will automatically combine these files together into an application:
```
# In ui.R file
my_ui <- fluidPage(
# define widgets
)
shinyUI(my_ui)
```
```
# In server.R file
my_server <- function(input, output) {
# define output reactive expressions
}
shinyServer(my_server)
```
You can then run the app by using the **“Run App”** button at the top of RStudio:
Use RStudio to run a shiny app defined in separate UI and Server files.
This chapter will primarily use the “single file” approach for compactness and readability, but you are encouraged to break up the UI and server into separate files for your own, larger applications.
* Note that it is also possible to simply define the (e.g.) `my_ui` and `my_server` variables in separate files, and then use `source()` to load them into the `app.R` file and pass them into `shinyApp()`.
### 16\.1\.2 The UI
The UI defines how the app will be displayed in the browser. You create a UI by calling a **layout function** such as `fluidPage()`, which will return a UI definition that can be used by the `shinyUI()` or `shinyApp()` functions.
You specify the “content” that you want the layout to contain (and hence the app to show) by passing each **content element** (piece of content) as an *argument* to that function:
```
# a "pseudocode" example, calling a function with arguments
ui <- fluidPage(element1, element2, element3)
```
Content elements are defined by calling specific *functions* that create them: for example `h1()` will create an element that has a first\-level heading, `textInput()` will create an element where the user can enter text, and `textOutput` will create an element that can have dynamic (changing) content. Usually these content elements are defined as *nested* (anonymous) variables, each on its own line:
```
# still just calling a function with arguments!
ui <- fluidPage(
h1("My App"), # first argument
textInput('username', label = "What is your name?"), # second argument
textOutput('message') # third argument
)
```
Note that layout functions *themselves return content elements*, meaning it is possible to include a layout inside another layout. This allows you to create complex layouts by combining multiple layout elements together. For example:
```
ui <- fluidPage( # UI is a fluid page
titlePanel("My Title"), # include panel with the title (also sets browser title)
sidebarLayout( # layout the page in two columns
sidebarPanel( # specify content for the "sidebar" column
p("sidebar panel content goes here")
),
mainPanel( # specify content for the "main" column
p("main panel content goes here")
)
)
)
```
See the [Shiny documentation](http://shiny.rstudio.com/reference/shiny/latest/) and [gallery](http://shiny.rstudio.com/gallery/) for details and examples of doing complex application layouts.
Fun Fact: much of Shiny’s styling and layout structure is based on the [Bootstrap](http://getbootstrap.com/) web framework.
You can include *static* (unchanging) content in a Shiny UI layout—this is similar to the kinds of content you would write in Markdown (rather than inline R) when using R Markdown. However, you usually don’t specify this content using Markdown syntax (though it is possible to [include a markdown file](http://shiny.rstudio.com/reference/shiny/latest/include.html)’s content). Instead, you include content functions that produce HTML, the language that Markdown is converted to when you look at it in the browser. These functions include:
* `p()` for creating paragraphs, the same as plain text in Markdown
* `h1()`, `h2()`, `h3()` etc for creating headings, the same as `# Heading 1`, `## Heading 2`, `### Heading 3` in Markdown
* `em()` for creating *emphasized* (italic) text, the same as `_text_` in Markdown
* `strong()` for creating **strong** (bolded) text, the same as `**text**` in Markdown
* `a(text, href='url')` for creating hyperlinks (anchors), the same as `[text](url)` in Markdown
* `img(text, src='url')` for including images, the same as `` in Markdown
There are many other methods as well, see [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson2/) for a list. If you are [familiar with HTML](https://info343-au16.github.io/#/tutorials/html), then these methods will seem familiar; you can also write content in HTML directly using the `tag()` content function.
#### 16\.1\.2\.1 Control Widgets and Reactive Outputs
It is more common to include **control widgets** as content elements in your UI layout. Widgets are *dynamic* (changing) control elements that the user can interact with. Each stores a **value** that the user has entered, whether by typing into a box, moving a slider, or checking a button. When the user changes their input, the stored *value* automatically changes as well.
Examples of control widgets (image from shiny.rstudio.com).
Like other content elements, widgets are created by calling an appropriate function. For example:
* `textInput()` creates a box in which the user can enter text
* `sliderInput()` creates a slider
* `selectInput()` creates a dropdown menu the user can choose from
* `checkboxInput()` creates a box the user can check (using `checkboxGroupInput()` to group them)
* `radioButtons()` creates “radio” buttons (which the user can select only one of at a time)
See [the documentation](http://shiny.rstudio.com/reference/shiny/latest/), and [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson3/) for a complete list.
All widget functions take at least two arguments:
* A **name** (as a string) for the widget’s value. This will be the **“key”** that will allow the server to be able to access the value the user has input (think: the key in the `input` *list*).
* A **label** (a string or content element described above) that will be shown alongside the widget and tell the user what the value represents. Note that this can be an empty string (`""`) if you don’t want to show anything.
Other arguments may be required by a particular widget—for example, a slider’s `min` and `max` values:
```
# this function would be nested in a layout function (e.g., `fluidPage()`)
sliderInput(
"age", # key this value will be assigned to
"Age of subjects", # label
min = 18, # minimum slider value
max = 80, # maximum slider value
value = 42 # starting value
)
```
Widgets are used to provide **inputs *to*** the Server; see the below section for how to use these inputs, as well as examples from [the gallery](http://shiny.rstudio.com/gallery/).
In order to display **outputs *from*** the Server, you include a **reactive output** element in your UI layout. These are elements similar to the basic content elements, but instead of just displaying *static* (unchanging) content they can display *dynamic* (changing) content produced by the Server.
As with other content elements, reactive outputs are creating by calling an appropriate function. For example:
* `textOutput()` displays output as plain text (note this output can be nested in a content element for formatting)
* `tableOutput()` displays output as a data table (similar to `kable()` in R Markdown). See also `dataTableOutput()` for an interactive version!
* `plotOutput()` displays a graphical plot, such as one created with `ggplot2`
Each of these functions takes as an argument the **name** (as a string) of the value that will be displayed. This is the **“key”** that allows it to access the value the Server is outputting. Note that the functions may take additional arguments as well (e.g., to specify the size of a plot); see [the documentation](http://shiny.rstudio.com/reference/shiny/latest/) for details.
### 16\.1\.3 The Server
The Server defines how the data input by the user will be used to create the output displayed by the app—that is, how the *control widgets* and *reactive outputs* will be connected. You create a Server by *defining a new function* (not calling a provided one):
```
server <- function(input, output) {
# assign values to `output` here
}
```
Note that this is *just a normal function* that happens two take **lists** as arguments. That means you can include the same kinds of code as you normally would—though that code will only be run once (when the application is first started) unless defined as part of a reactive expression.
The first argument is a list of any values defined by the *control widgets*: each **name** in a control widget will be a **key** in this list. For example, using the above `sliderInput()` example would cause the list to have an `age` key (referenced as `input$age`). This allows the Server to access any data that the user has input, using the key names defined in the UI. Note that the values in this list *will change as the user interacts with the UI’s control widgets*.
The purpose of the Server function is to assign new *values* to the `output` argument list (each with an appropriate *key*). These values will then be displayed by the *reactive outputs* defined in the UI. To make it so that the values can actually be displayed by by the UI, the values assigned to this list need to be the results of **Render Functions**. Similar to creating widgets or reactive outputs, different functions are associated with different types of output the server should produce. For example:
* `renderText()` will produce text (character strings) that can be displayed (i.e., by `textOutput()` in the UI)
* `renderTable()` will produce a table that can be displayed (i.e., by `tableOutput()` in the UI)
* `renderPlot()` will produce a graphical plot that can be displayed (i.e., by `plotOutput()` in the UI)
Render functions take as an argument a **Reactive Expression**. This is a lot like a function: it is a **block** of code (in braces **`{}`**) that **returns** the value which should be rendered. For example:
```
output$msg <- renderText({
# code goes here, just like any other function
my_greeting <- "Hello"
# code should always draw upon a key from the `input` variable
message <- paste(my_greeting, input$username)
# return the variable that will be rendered
return(message)
})
```
The only difference between writing a *reactive expression* and a function is that you only include the *block* (the braces and the code inside of them): you don’t use the keyword `function` and don’t specify a set of arguments.
This technically defines a *closure*, which is a programming concept used to encapsulate functions and the context for those functions.
These *reactive expressions* will be “re\-run” **every time** one of the `input` values that it references changes. So if the user interacts with the `username` control widget (and thereby changes the value of the `input` list), the expression in the above `renderText()` will be executed again, returning a new value that will be assigned to `output$msg`. And since `output$msg` has now changed, any *reactive output* in the UI (e.g., a `textOutput()`) will update to show the latest value. This makes the app interactive!
#### 16\.1\.3\.1 Multiple Views
It is quite common in a Shiny app to produce *lots* of output variables, and thus to have multiple reactive expressions. For example:
```
server <- function(input, output) {
# render a histogram plot
output$hist <- renderPlot({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
return(hist(uniform_nums)) # built-in plotting for simplicity
})
# render the counts
output$counts <- renderPrint({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
counts <- factor(cut(uniform_nums, breaks=1:10)) # factor
return(summary(counts)) # simple vector of counts
})
}
```
If you look at the above example though, you’ll notice that each render function produces a set of random numbers… which means each will produce a *different* set of numbers! The histogram and the table won’t match!
This is an example of where you want to share a single piece of data (a single **model**) between multiple different renditions (multiple **views**). Effectively, you want to define a shared variable (the `uniform_nums`) that can be referenced by both render functions. But since you need that shared variable to be able to *update* whenever the `input` changes, you need to make it be a *reactive expression* itself. You can do this by using the **`reactive()`** function:
```
server <- function(input, output) {
# define a reactive variable
uniform_nums <- reactive({
return(runif(input$num, 1, 10)) # just like for a render function
})
# render a histogram plot
output$hist <- renderPlot({
return(hist(uniform_nums())) # call the reactive variable AS A FUNCTION
})
# render the counts
output$counts <- renderPrint({
counts <- factor(cut(uniform_nums(), breaks=1:10)) # call the reactive variable AS A FUNCTION
return(summary(counts))
})
}
```
The `reactive()` function lets you define a single “variable” that is a *reactive function* which can be called from within the render functions. Importantly, the value returned by this function (the `uniform_nums()`) only changes **when a referenced `input` changes**. Thus as long as `input$num` stays the same, `uniform_nums()` will return the same value.
This is very powerful for allowing multiple **views** of a single piece of data: you can have a single source of data displayed both graphically and textually, with both views linked off of the same processed data table. Additionally, it can help keep your code more organized and readable, and avoid needing to duplicate any processing.
### 16\.1\.1 Application Structure
Shiny applications are divided into two parts:
1. The **User Interface (UI)** defines how the application will be *displayed* in the browser. The UI can render R content such as text or graphics just like R Markdown, but it can also include **widgets**, which are interactive controls for your application (think buttons or sliders). The UI can specify a **layout** for these components (e.g., so you can put widgets above, below, or beside one another).
The UI for a Shiny application is defined as a **value**, usually one returned from calling a **layout function**. For example:
```
# The ui is the result of calling the `fluidPage()` layout function
my_ui <- fluidPage(
# A widget: a text input box (save input in the `username` key)
textInput("username", label = "What is your name?"),
# An output element: a text output (for the `message` key)
textOutput("message")
)
```
This UI defines a [fluidPage](https://shiny.rstudio.com/reference/shiny/latest/fluidPage.html) (where the content flows “fluidly” down the page), that contains two *content elements*: a text input box where the user can type their name, and some outputted text based on the `message` variable.
2. The **Server** defines the data that will be displayed through the UI. You can think of this as an interactive R script that the user will be able to “run”: the script will take in *inputs* from the user (based on their interactions) and provide *outputs* that the UI will then display. The server uses **reactive expressions**, which are like functions that will automatically be re\-run whenever the input changes. This allows the output to be dynamic and interactive.
The Server for a Shiny application is defined as a **function** (as opposed to the UI which is a *value*). This function takes in two *lists* as argments: an `input` and `output`. It then uses *render functions* and *reactive expressions* that assign values to the `output` list based on the `input` list. For example:
```
# The server is a function that takes `input` and `output` args
my_server <- function(input, output) {
# assign a value to the `message` key in `output`
# argument is a reactive expression for showing text
output$message <- renderText({
# use the `username` key from input and and return new value
# for the `message` key in output
return(paste("Hello", input$username))
})
}
```
Combined, this UI and server will allow the user to type their name into an input box, and will then say “hello” to whatever name is typed in.
More details about the UI and server components can be found in the sections below.
#### 16\.1\.1\.1 Combining UI and Server
There are two ways of combining the UI and server:
The first (newer) way is to define a file called **`app.R`**. This file should call the [**`shinyApp()`**](http://shiny.rstudio.com/reference/shiny/latest/shinyApp.html) function, which takes a UI value and Server function as arguments. For example:
```
# pass in the variables defined above
shinyApp(ui = my_ui, server = my_server)
```
Executing the `shinyApp()` function will start the App (you can also click the **“Run App”** button at the top of RStudio).
* Note: if you change the UI or the Server, you do **not** need to stop and start the app; you can simply refresh the browser or viewer window and it will reload with the new UI and server.
* If you need to stop the App, you can hit the “Stop Sign” icon on the RStudio console.
Using this function allows you to define your entire application (UI and Server) in a single file (which **must** be named `app.R`). This approach is good for simple applications that you wish to be able to share with others, since the entire application code can be listed in a single file.
However, it is also possible to define the UI and server as *separate* files. This allows you to keep the presentation (UI) separated from the logic (server) of your application, making it easier to maintain and change in the future. To do this, you define two separate files: **`ui.R`** for the UI and **`server.R`** for the Server (the files **must** be named `ui.R` and `server.R`). In these files, you must call the functions `shinyUI()` and `shinyServer()` respectively to create the UI and server, and then RStudio will automatically combine these files together into an application:
```
# In ui.R file
my_ui <- fluidPage(
# define widgets
)
shinyUI(my_ui)
```
```
# In server.R file
my_server <- function(input, output) {
# define output reactive expressions
}
shinyServer(my_server)
```
You can then run the app by using the **“Run App”** button at the top of RStudio:
Use RStudio to run a shiny app defined in separate UI and Server files.
This chapter will primarily use the “single file” approach for compactness and readability, but you are encouraged to break up the UI and server into separate files for your own, larger applications.
* Note that it is also possible to simply define the (e.g.) `my_ui` and `my_server` variables in separate files, and then use `source()` to load them into the `app.R` file and pass them into `shinyApp()`.
### 16\.1\.2 The UI
The UI defines how the app will be displayed in the browser. You create a UI by calling a **layout function** such as `fluidPage()`, which will return a UI definition that can be used by the `shinyUI()` or `shinyApp()` functions.
You specify the “content” that you want the layout to contain (and hence the app to show) by passing each **content element** (piece of content) as an *argument* to that function:
```
# a "pseudocode" example, calling a function with arguments
ui <- fluidPage(element1, element2, element3)
```
Content elements are defined by calling specific *functions* that create them: for example, `h1()` creates a first\-level heading, `textInput()` creates an element where the user can enter text, and `textOutput()` creates an element whose content can be dynamic (changing). Usually these content elements are written directly as *nested* (anonymous) arguments, each on its own line:
```
# still just calling a function with arguments!
ui <- fluidPage(
h1("My App"), # first argument
textInput('username', label = "What is your name?"), # second argument
textOutput('message') # third argument
)
```
Note that layout functions *themselves return content elements*, meaning it is possible to include a layout inside another layout. This allows you to create complex layouts by combining multiple layout elements together. For example:
```
ui <- fluidPage( # UI is a fluid page
titlePanel("My Title"), # include panel with the title (also sets browser title)
sidebarLayout( # layout the page in two columns
sidebarPanel( # specify content for the "sidebar" column
p("sidebar panel content goes here")
),
mainPanel( # specify content for the "main" column
p("main panel content goes here")
)
)
)
```
See the [Shiny documentation](http://shiny.rstudio.com/reference/shiny/latest/) and [gallery](http://shiny.rstudio.com/gallery/) for details and examples of doing complex application layouts.
Fun Fact: much of Shiny’s styling and layout structure is based on the [Bootstrap](http://getbootstrap.com/) web framework.
You can include *static* (unchanging) content in a Shiny UI layout—this is similar to the kinds of content you would write in Markdown (rather than inline R) when using R Markdown. However, you usually don’t specify this content using Markdown syntax (though it is possible to [include a markdown file](http://shiny.rstudio.com/reference/shiny/latest/include.html)’s content). Instead, you include content functions that produce HTML, the language that Markdown is converted to when you look at it in the browser. These functions include:
* `p()` for creating paragraphs, the same as plain text in Markdown
* `h1()`, `h2()`, `h3()` etc for creating headings, the same as `# Heading 1`, `## Heading 2`, `### Heading 3` in Markdown
* `em()` for creating *emphasized* (italic) text, the same as `_text_` in Markdown
* `strong()` for creating **strong** (bolded) text, the same as `**text**` in Markdown
* `a(text, href='url')` for creating hyperlinks (anchors), the same as `[text](url)` in Markdown
* `img(text, src='url')` for including images, the same as `` in Markdown
There are many other methods as well, see [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson2/) for a list. If you are [familiar with HTML](https://info343-au16.github.io/#/tutorials/html), then these methods will seem familiar; you can also write content in HTML directly using the `tag()` content function.
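For instance, a minimal sketch of a UI built only from these static content functions (the page text here is made up purely for illustration):
```
# a UI made up entirely of static content elements
ui <- fluidPage(
  h1("About This App"),                                         # first-level heading
  p("This page is ", strong("static"), ": it never changes."),  # paragraph with bold text
  p(em("Learn more"), " at the link below."),                   # paragraph with italic text
  a("Shiny documentation", href = "http://shiny.rstudio.com/")  # hyperlink
)
```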
#### 16\.1\.2\.1 Control Widgets and Reactive Outputs
It is more common to include **control widgets** as content elements in your UI layout. Widgets are *dynamic* (changing) control elements that the user can interact with. Each stores a **value** that the user has entered, whether by typing into a box, moving a slider, or checking a button. When the user changes their input, the stored *value* automatically changes as well.
Examples of control widgets (image from shiny.rstudio.com).
Like other content elements, widgets are created by calling an appropriate function. For example:
* `textInput()` creates a box in which the user can enter text
* `sliderInput()` creates a slider
* `selectInput()` creates a dropdown menu the user can choose from
* `checkboxInput()` creates a box the user can check (use `checkboxGroupInput()` to group several of them)
* `radioButtons()` creates “radio” buttons (which the user can select only one of at a time)
See [the documentation](http://shiny.rstudio.com/reference/shiny/latest/), and [this tutorial lesson](http://shiny.rstudio.com/tutorial/lesson3/) for a complete list.
All widget functions take at least two arguments:
* A **name** (as a string) for the widget’s value. This will be the **“key”** that allows the server to access the value the user has input (think: the key in the `input` *list*).
* A **label** (a string or content element described above) that will be shown alongside the widget and tell the user what the value represents. Note that this can be an empty string (`""`) if you don’t want to show anything.
Other arguments may be required by a particular widget—for example, a slider’s `min` and `max` values:
```
# this function would be nested in a layout function (e.g., `fluidPage()`)
sliderInput(
"age", # key this value will be assigned to
"Age of subjects", # label
min = 18, # minimum slider value
max = 80, # maximum slider value
value = 42 # starting value
)
```
Widgets are used to provide **inputs *to*** the Server; see the below section for how to use these inputs, as well as examples from [the gallery](http://shiny.rstudio.com/gallery/).
In order to display **outputs *from*** the Server, you include a **reactive output** element in your UI layout. These are elements similar to the basic content elements, but instead of just displaying *static* (unchanging) content they can display *dynamic* (changing) content produced by the Server.
As with other content elements, reactive outputs are created by calling an appropriate function. For example:
* `textOutput()` displays output as plain text (note this output can be nested in a content element for formatting)
* `tableOutput()` displays output as a data table (similar to `kable()` in R Markdown). See also `dataTableOutput()` for an interactive version!
* `plotOutput()` displays a graphical plot, such as one created with `ggplot2`
Each of these functions takes as an argument the **name** (as a string) of the value that will be displayed. This is the **“key”** that allows it to access the value the Server is outputting. Note that the functions may take additional arguments as well (e.g., to specify the size of a plot); see [the documentation](http://shiny.rstudio.com/reference/shiny/latest/) for details.
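For instance, a minimal sketch of a UI that reserves space for each kind of dynamic output (the keys `"message"`, `"results_table"`, and `"results_plot"` are placeholders that a Server would need to assign):
```
ui <- fluidPage(
  textOutput("message"),        # will show the text assigned to output$message
  tableOutput("results_table"), # will show the table assigned to output$results_table
  plotOutput("results_plot")    # will show the plot assigned to output$results_plot
)
```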
### 16\.1\.3 The Server
The Server defines how the data input by the user will be used to create the output displayed by the app—that is, how the *control widgets* and *reactive outputs* will be connected. You create a Server by *defining a new function* (not calling a provided one):
```
server <- function(input, output) {
# assign values to `output` here
}
```
Note that this is *just a normal function* that happens to take two **lists** as arguments. That means you can include the same kinds of code as you normally would, though that code will only be run once (when the application is first started) unless it is defined as part of a reactive expression.
The first argument is a list of any values defined by the *control widgets*: each **name** in a control widget will be a **key** in this list. For example, using the above `sliderInput()` example would cause the list to have an `age` key (referenced as `input$age`). This allows the Server to access any data that the user has input, using the key names defined in the UI. Note that the values in this list *will change as the user interacts with the UI’s control widgets*.
The purpose of the Server function is to assign new *values* to the `output` argument list (each with an appropriate *key*). These values will then be displayed by the *reactive outputs* defined in the UI. To make it so that the values can actually be displayed by the UI, the values assigned to this list need to be the results of **Render Functions**. Similar to creating widgets or reactive outputs, different functions are associated with different types of output the server should produce. For example:
* `renderText()` will produce text (character strings) that can be displayed (i.e., by `textOutput()` in the UI)
* `renderTable()` will produce a table that can be displayed (i.e., by `tableOutput()` in the UI)
* `renderPlot()` will produce a graphical plot that can be displayed (i.e., by `plotOutput()` in the UI)
Render functions take as an argument a **Reactive Expression**. This is a lot like a function: it is a **block** of code (in braces **`{}`**) that **returns** the value which should be rendered. For example:
```
output$msg <- renderText({
# code goes here, just like any other function
my_greeting <- "Hello"
# code should always draw upon a key from the `input` variable
message <- paste(my_greeting, input$username)
# return the variable that will be rendered
return(message)
})
```
The only difference between writing a *reactive expression* and a function is that you only include the *block* (the braces and the code inside of them): you don’t use the keyword `function` and don’t specify a set of arguments.
This technically defines a *closure*, which is a programming concept used to encapsulate functions and the context for those functions.
These *reactive expressions* will be “re\-run” **every time** one of the `input` values that it references changes. So if the user interacts with the `username` control widget (and thereby changes the value of the `input` list), the expression in the above `renderText()` will be executed again, returning a new value that will be assigned to `output$msg`. And since `output$msg` has now changed, any *reactive output* in the UI (e.g., a `textOutput()`) will update to show the latest value. This makes the app interactive!
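Putting these pieces together, a complete single\-file greeting app might look like the following sketch (note that the string passed to `textOutput()` must match the key assigned on `output`; here `"message"` is used for both):
```
library(shiny)

my_ui <- fluidPage(
  textInput("username", label = "What is your name?"), # provides input$username
  textOutput("message")                                # displays output$message
)

my_server <- function(input, output) {
  output$message <- renderText({
    paste("Hello,", input$username)  # re-runs whenever input$username changes
  })
}

shinyApp(ui = my_ui, server = my_server)  # start the app
```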
#### 16\.1\.3\.1 Multiple Views
It is quite common in a Shiny app to produce *lots* of output variables, and thus to have multiple reactive expressions. For example:
```
server <- function(input, output) {
# render a histogram plot
output$hist <- renderPlot({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
return(hist(uniform_nums)) # built-in plotting for simplicity
})
# render the counts
output$counts <- renderPrint({
uniform_nums <- runif(input$num, 1, 10) # random nums between 1 and 10
counts <- factor(cut(uniform_nums, breaks=1:10)) # factor
return(summary(counts)) # simple vector of counts
})
}
```
If you look at the above example though, you’ll notice that each render function produces a set of random numbers… which means each will produce a *different* set of numbers! The histogram and the table won’t match!
This is an example of where you want to share a single piece of data (a single **model**) between multiple different renditions (multiple **views**). Effectively, you want to define a shared variable (the `uniform_nums`) that can be referenced by both render functions. But since you need that shared variable to *update* whenever the `input` changes, you need to make it a *reactive expression* itself. You can do this by using the **`reactive()`** function:
```
server <- function(input, output) {
# define a reactive variable
uniform_nums <- reactive({
return(runif(input$num, 1, 10)) # just like for a render function
})
# render a histogram plot
output$hist <- renderPlot({
return(hist(uniform_nums())) # call the reactive variable AS A FUNCTION
})
# render the counts
output$counts <- renderPrint({
counts <- factor(cut(uniform_nums(), breaks=1:10)) # call the reactive variable AS A FUNCTION
return(summary(counts))
})
}
```
The `reactive()` function lets you define a single “variable” that is a *reactive function* which can be called from within the render functions. Importantly, the value returned by this function (the `uniform_nums()`) only changes **when a referenced `input` changes**. Thus as long as `input$num` stays the same, `uniform_nums()` will return the same value.
This is very powerful for allowing multiple **views** of a single piece of data: you can have a single source of data displayed both graphically and textually, with both views linked off of the same processed data table. Additionally, it can help keep your code more organized and readable, and avoid needing to duplicate any processing.
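For completeness, a sketch of a UI that could drive this server (`verbatimTextOutput()` is the output element that pairs with `renderPrint()`, and the widget key `num` matches the `input$num` referenced above):
```
ui <- fluidPage(
  sliderInput("num", "How many random numbers?",
              min = 1, max = 500, value = 100), # provides input$num
  plotOutput("hist"),                           # displays output$hist
  verbatimTextOutput("counts")                  # displays output$counts
)
```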
16\.2 Publishing Shiny Apps
---------------------------
Sharing a Shiny App with the world is a bit more involved than simply pushing the code to GitHub. You can’t just use GitHub pages to host the code because, in addition to the HTML UI, you need an R interpreter session to run the Server that the UI can connect to (and GitHub does not provide R interpreters)!
While there are a few different ways of “hosting” Shiny Apps, in this course you’ll use the simplest one: hosting through [**shinyapps.io**](https://www.shinyapps.io). shinyapps.io is a platform for hosting and running Shiny Apps; while large applications cost money, anyone can deploy a simple app (like the ones you’ll create in this course) for free.
In order to host your app on shinyapps.io, you’ll need to [create a free account](https://www.shinyapps.io/admin/#/signup). Note that you can sign up with GitHub or your Google/UW account. Follow the site’s instructions to
1. Select an account name (use something professional, like you used when signing up with GitHub)
2. Install the required `rsconnect` package (may be included with RStudio)
3. Set your authorization token (“password”). Just click the green “Copy to Clipboard” button, and then paste that into the **Console** in RStudio. You should only need to do this once.
Don’t worry about “Step 3 \- Deploy”; you’ll do that through RStudio directly!
After you’ve set up an account, you can *Run* your application (as above) and hit the **Publish** button in the upper\-right corner:
How to publish a running Shiny App to shinyapps.io.
This will put your app online, available at
```
https://USERNAME.shinyapps.io/APPNAME/
```
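If you prefer deploying from the R console rather than the **Publish** button, the `rsconnect` package can do the same job; a minimal sketch (assuming your app lives in a folder named `myapp`):
```
# install.packages("rsconnect")  # once, if it didn't come with RStudio
library(rsconnect)

# upload the folder containing app.R (or ui.R/server.R) to shinyapps.io
deployApp(appDir = "myapp", appName = "myapp")
```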
**Important** Publishing to shinyapps.io is one of the major “pain points” in working with Shiny. For the best experience, be sure to:
1. Always test and debug your app *locally* (e.g., on your own computer, by running the App through RStudio). Make sure it works on your machine before you try to put it online.
2. Use correct folder structures and *relative paths*. All of your app should be in a single folder (usually named after the project). Make sure any `.csv` or `.R` files referenced are inside the app folder, and that you use relative paths to refer to them. Do not include any `setwd()` statements in your code; you should only set the working directory through RStudio (because shinyapps.io will have its own working directory).
3. It is possible to [see the logs for your deployed app](http://docs.rstudio.com/shinyapps.io/applications.html#logging), which may include errors explaining any problems that arise when you deploy your app.
For more options and details, see [the shinyapps.io documentation](http://docs.rstudio.com/shinyapps.io/index.html).
Resources
---------
* [Shiny Documentation](http://shiny.rstudio.com/articles/)
* [Shiny Basics Article](http://shiny.rstudio.com/articles/basics.html)
* [Shiny Tutorial](http://shiny.rstudio.com/tutorial/) (video; links to text at bottom)
* [Shiny Cheatsheet](https://www.rstudio.com/wp-content/uploads/2016/01/shiny-cheatsheet.pdf)
* [Shiny Example Gallery](http://shiny.rstudio.com/gallery/)
* [shinyapps.io User Guide](http://docs.rstudio.com/shinyapps.io/index.html)
* [Interactive Plots with Shiny](http://shiny.rstudio.com/articles/plot-interaction.html) (see also [here](https://blog.rstudio.org/2015/06/16/shiny-0-12-interactive-plots-with-ggplot2/))
* [Interactive Docs with Shiny](https://shiny.rstudio.com/articles/interactive-docs.html)
A Plotly
========
In this module, you’ll start building visualizations using the Plotly API. Plotly is visualization software that recently open\-sourced its API to JavaScript, MATLAB, Python, and R, making it quite valuable to learn. Plotly graphs are fairly customizable, and (by default) come with a variety of interactions on each chart (hover, brush to zoom, pan, etc.). Many of these behaviors are fairly cumbersome to build programmatically, which makes a library like Plotly quite attractive.
A.1 Getting Started
-------------------
The Plotly API is an R package that you’ll use to build interactive graphics. Like other open\-source packages we’ve used in this course, we’ll load this API as an R package as follows:
```
# Install package
install.packages("plotly")
# Load library
library(plotly)
```
Then the `plot_ly()` function will be available to you to build graphics on your page.
Note: sometimes RStudio fails to show your plotly graph in a website preview when you use `plotly` in an `RMarkdown` document. However, if you click on the **Open in Browser** button, you should be able to interact with your chart as it will show in a web browser. This isn’t your fault, and doesn’t need to be debugged.
A.2 Basic Charts
----------------
One of the best ways to start building charts with Plotly is to take a look at a [basic example](https://plot.ly/r/#basic-charts) of your choice, and explore the syntax. In general, to build a Plotly object (graph) you’ll pass a **dataframe** into the `plot_ly` function, then **adjust the parameters** to specify what you want to visualize. For example, here is the basic example of a scatterplot from the documentation:
```
# Make a scatterplot of the iris data
plot_ly(data = iris, x = ~Sepal.Length, y = ~Petal.Length, mode = "markers", type = "scatter")
```
The approach seems pretty straightforward – in fact, if you exclude `type = "scatter"` and `mode = "markers"`, Plotly will make an educated guess about what type of plot you want (and in this case, it will in fact create a scatterplot!). The only syntax that looks a bit strange is the tilde character (`~`). In R, the tilde turns a column name into a **formula**, which is how Plotly knows to look that variable up in the supplied data frame (a design choice of the API’s developers).
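For instance, the sketch below omits `type` and `mode` entirely; Plotly will typically message that it assumed a scatter trace, but the resulting chart is the same:
```
# same data, letting plotly infer the trace type
plot_ly(data = iris, x = ~Sepal.Length, y = ~Petal.Length)
```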
A.3 Layout
----------
While the `plot_ly` function controls the data that is being visualized, additional chart options such as *titles and axes* are controlled by the `layout` function. The `layout` function accepts as a parameter **a plotly object**, and *manipulates that object*. Again, I think a great place to start is an example in the [documentation](https://plot.ly/r/line-and-scatter/):
```
# From documentation
# Create a plotly object `p`
p <- plot_ly(data = iris, x = ~Sepal.Length, y = ~Petal.Length, type = "scatter", mode = "markers",
marker = list(size = 10,
color = 'rgba(255, 182, 193, .9)',
line = list(color = 'rgba(152, 0, 0, .8)',
width = 2))) %>%
# Use pipe function to pass the plotly object into the `layout` function
layout(title = 'Styled Scatter',
yaxis = list(zeroline = FALSE),
xaxis = list(zeroline = FALSE))
# Show chart
p
```
This example uses the pipe operator (`%>%`) to pass the plotly object *into* the `layout` function as the first argument. We can then infer the structure of the other parameters, which you can read more about in the [API Documentation](https://plot.ly/r/reference/#Layout_and_layout_style_objects).
A.4 Hovers
----------
By default, plotly will provide information on each element when you hover over it (which is **awesome**). To manipulate the information in a hover, you can modify the `text` attribute in the `plot_ly` function, and you can **use your data** to populate the information on hover:
```
# From documentation
# Create data
d <- diamonds[sample(nrow(diamonds), 1000), ]
# Create plot, specifying hover text
p <- plot_ly(
d, x = ~carat, y = ~price, mode = "markers", type = "scatter",
# Hover text:
text = ~paste0("Price:$", price, '<br>Cut: ', cut),
color = ~carat, size = ~carat
)
# Show chart
p
```
Note that plotly allows you to use HTML syntax in the text formula. In this case, it uses a line break `<br>` to improve the layout.
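If you want the hover to show *only* your custom text (hiding the default x/y readout), you can additionally set the `hoverinfo` attribute; a sketch that builds on the example above:
```
# reuse `d` from above; show only the custom text on hover
p <- plot_ly(
  d, x = ~carat, y = ~price, mode = "markers", type = "scatter",
  text = ~paste0("Price:$", price, '<br>Cut: ', cut),
  hoverinfo = "text", # suppress the default x/y hover values
  color = ~carat, size = ~carat
)
p
```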
Resources
---------
* [Plotly Website](https://plot.ly/)
* [Plotly R API](https://plot.ly/r/)
* [Getting Started with Plotly for R](https://plot.ly/r/getting-started/)
* [Plotly Cheatsheet](https://images.plot.ly/plotly-documentation/images/r_cheat_sheet.pdf)
* [Plotly book (extensive documentation)](https://cpsievert.github.io/plotly_book/)
B R Language Control Structures
===============================
In chapter [Functions](functions.html#functions) we briefly described conditional execution
using the `if`/`else`
construct. Here we introduce the other main control structures, and
cover the if/else statement in more detail.
B.1 Loops
---------
One of the tasks computers are good at is executing similar operations repeatedly, many times over. This is achieved through *loops*. R implements three loops: the for\-loop, the while\-loop, and the repeat\-loop, which are among the most common loop constructs in modern programming languages. Loops are often advised against in R. While it is true that loops in R are more expensive than in several other languages, and can often be replaced by more efficient constructs, they nevertheless have an important role. Used well, loops are easy to write, fast enough, and improve the readability of your code.
### B.1\.1 For Loop
For\-loop is a good choice if you want to run some statements
repeatedly, each time picking a different value for a variable from a
sequence;
or just repeat some
statements a given number of times. The syntax of for\-loop is the
following:
```
for(variable in sequence) {
# do something many times. Each time 'variable' has
# a different value, taken from 'sequence'
}
```
The statement block inside of the curly braces {} is
repeated through the sequence.
Despite the curly braces, the loop body is still evaluated in the same
environment as the rest of the code. This means the variables
generated before entering the loop are visible from inside, and the
variables created inside will be visible after the loop ends. This is
in contrast to functions where the function body, also enclosed in curly
braces, is a separate scope.
A simple and perhaps the most common usage of for loop is just to run
some code a given number of times:
```
for(i in 1:5)
print(i)
```
```
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
```
Through the loop the variable `i` takes all the values from the
sequence `1:5`. Note: we do not need curly braces if the loop block
only contains a single statement.
But the sequence does not have to be a simple sequence of numbers. It
may as well be a list:
```
data <- list("Xiaotian", 25, c(2061234567, 2069876543))
for(field in data) {
cat("field length:", length(field), "\n")
}
```
```
## field length: 1
## field length: 1
## field length: 2
```
As another less common example, we may run different functions over
the same data:
```
data <- 1:4
for(fun in c(sqrt, exp, sin)) {
print(fun(data))
}
```
```
## [1] 1.000000 1.414214 1.732051 2.000000
## [1] 2.718282 7.389056 20.085537 54.598150
## [1] 0.8414710 0.9092974 0.1411200 -0.7568025
```
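If you need both the position and the value, a common pattern is to loop over indices with `seq_along()`; a small sketch using a made\-up named vector:
```
scores <- c(alice = 90, bob = 85, carol = 78)
for(i in seq_along(scores)) {
  cat(i, names(scores)[i], scores[i], "\n")  # index, name, and value
}
```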
### B.1\.2 While\-loop
The while\-loop lets the programmer decide explicitly whether to execute the statements one more time:
```
while(condition) {
# do something if the 'condition' is true
}
```
For instance, we can repeat something a given number of times
(although this is more of a task for for\-loop):
```
i <- 0
while(i < 5) {
print(i)
i <- i + 1
}
```
```
## [1] 0
## [1] 1
## [1] 2
## [1] 3
## [1] 4
```
Note that in case the condition is not true to begin with, the loop never
executes and control is immediately passed to the code after the loop:
```
i <- 10
while(i < 5) {
print(i)
i <- i + 1
}
cat("done\n")
```
```
## done
```
While\-loop is great for handling repeated attempts to get input in
shape. One can write something like this:
```
data <- getData()
while(!dataIsGood(data)) {
data <- getData(tryReallyHard = TRUE)
}
```
This will run `getData()` until the data is good. If it is not good on the first try, the code tries really hard on the next attempt(s).
A while\-loop can also easily produce an infinite loop:
```
while(TRUE) { ... }
```
This will execute until something breaks the loop. Another somewhat
interesting usage of `while` is to disable execution of part of your
code:
```
while(FALSE) {
## code you don't want to be executed
}
```
### B.1\.3 repeat\-loop
The R way of implementing the repeat\-loop does not include any condition, and hence it just keeps running indefinitely, or until something breaks the execution (see below):
```
repeat {
## just do something indefinitely many times.
## if you want to stop, use `break` statement.
}
```
Repeat is useful in many similar situations as while. For instance,
we can write a modified version of getting data using `repeat`:
```
repeat {
data <- getData()
if(dataIsGood())
break
}
```
The main difference between this and the previous `while` version is that (a) we don’t have to call the `getData()` function in two places, and (b) each attempt gets the data in the same way.
Exactly as in the case of *for* and *while* loops, enclosing the loop body in curly braces is only necessary if it contains more than one statement. For instance, if you want to close all the graphics devices left open by buggy code, you can use the following one\-liner from the console:
```
repeat dev.off()
```
The loop will be broken by an error when it attempts to close the last (null) device.
It should also be noted that the loop does not create a separate
environment (scope). All the variables from outside of the loop are
visible inside, and variables created inside of the loop remain
accessible after the loop terminates.
### B.1\.4 Leaving Early: `break` and `next`
A straightforward way to leave a loop is the `break` statement. It just leaves
the loop and transfers control to the code immediately after the loop
body. It breaks all three loops discussed here (*for*, *while*, and *repeat*\-loops)
but not some of the other types of loops implemented outside of base R. For instance:
```
for(i in 1:10) {
cat(i, "\n")
if(i > 4)
break
}
```
```
## 1
## 2
## 3
## 4
## 5
```
```
cat("done\n")
```
```
## done
```
As the loop body will use the same environment (the same variable
values) as the rest of the code, the loop variables can be used right
afterwards. For instance, we can see that `i = 5`:
```
print(i)
```
```
## [1] 5
```
In contrast to `break`, `next` throws the execution flow back to the head of the loop without running the commands that follow `next`. In a for\-loop this means a new value is picked from the sequence; in a while\-loop the condition is evaluated again; the head of a repeat\-loop does not do any calculations. For instance, we can print only even numbers:
```
for(i in 1:10) {
if(i %% 2 != 0)
next
print(i)
}
```
```
## [1] 2
## [1] 4
## [1] 6
## [1] 8
## [1] 10
```
`next` skips the `print()` call whenever the remainder of division by 2 is not 0\.
### B.1\.5 When (Not) To Use Loops In R?
In general, use loops when they improve code readability, make the code easier to extend, and do not cause too large an inefficiency. In practice, this means: use loops for repeating something slow, and don’t use loops where vectorized operations can do the job instead.
For instance, look at the examples above where we were reading data until it was good. There is probably very little you can achieve by avoiding loops, except making your code messy. First, the input will probably be attempted only a few times (otherwise re\-consider how you get your data), and second, data input is probably a far slower process than the loop overhead. Just use the loop.
Similar arguments hold if you are looping over input files to read, output files to create, figures to plot, or web pages to scrape. In all these cases the loop overhead is negligible compared to the task performed inside the loop, and there are no real vectorized alternatives anyway.
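For example, a sketch of reading every CSV file in a (hypothetical) `data` folder, where the file I/O completely dominates the loop overhead:
```
files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)
results <- list()
for(f in files) {
  results[[f]] <- read.csv(f) # reading the file is the slow part, not the loop
}
```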
The opposite holds where good vectorized alternatives exist. Never do this kind of computation in R:
```
res <- numeric()
for(i in 1:1000) {
res <- c(res, sqrt(i))
}
```
This loop introduces three inefficiencies. First, and most importantly, you are growing your results vector `res` as you add new values to it. This involves creating new, longer vectors again and again as the old one fills up, which is very slow. Second, since a true vectorized alternative exists here, you can easily speed the code up by an order of magnitude or more just by switching to it. Third, and this is again an important point, this code is much harder to read than the pure vectorized expression:
```
res <- sqrt(1:1000)
```
We can easily demonstrate how much faster the vectorized expression works. Let’s do this using the *microbenchmark* package (it provides nanosecond\-resolution timings). We wrap these expressions in functions for easier handling by *microbenchmark*, and we also include a middle version that uses a loop but avoids the terribly slow vector re\-allocation by creating a result of the right size in advance:
```
seq <- 1:1000 # global variables are easier to handle with microbenchmark
baad <- function() {
r <- numeric()
for(i in seq)
r <- c(r, sqrt(i))
}
soso <- function() {
r <- numeric(length(seq))
for(i in seq)
r[i] <- sqrt(i)
}
good <- function()
sqrt(seq)
microbenchmark::microbenchmark(baad(), soso(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 826.782 836.8505 1121.75340 849.7555 876.0790 3884.545 100
## soso() 42.018 42.5170 58.71166 43.0100 44.0435 1576.462 100
## good() 2.259 2.4470 8.64512 2.6815 3.1675 549.440 100
```
Comparing the median values, we can see that the middle example is more than 10x slower than the vectorized example, and the first version is almost 300x slower!
The above holds for “true” vectorized operations. R provides many pseudo\-vectorizers (the lapply family) that may be handy in many circumstances, but not when you need speed. Under the hood these functions just use a loop. We can demonstrate this by adding one more function to our family:
```
soso2 <- function() {
sapply(seq, sqrt)
}
microbenchmark::microbenchmark(baad(), soso(), soso2(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 823.078 853.0670 1039.05146 862.1650 874.8660 3547.314 100
## soso() 41.704 42.9815 43.67443 43.3550 43.7230 68.680 100
## soso2() 219.362 222.0515 235.62589 223.8685 230.2925 1023.763 100
## good() 2.359 2.5435 2.93111 2.8205 3.0625 4.675 100
```
As it turns out, `sapply` is much slower than the plain for\-loop: the lapply family is not a substitute for true vectorization. However, one may argue that the `soso2` function is easier to understand than the explicit loop in `soso`; in any case, it is easier to type when working at the interactive console.
Finally, it should be noted that when performing truly vectorized operations on long vectors, the overhead of the R interpreter becomes negligible and the resulting speed is almost equal to that of the corresponding C code. And optimized R code can easily beat non\-optimized C.
B.2 More about `if` and `else`
------------------------------
The basics of if\-else blocks were covered in Chapter [Functions](functions.html#functions).
Here we discuss some more advanced aspects.
### B.2\.1 Where To Put Else
Normally R needs to decide whether a line of code belongs to an earlier statement or begins a fresh one. For instance,
```
if(condition) {
## do something
}
else {
## do something else
}
```
may fail. R finishes the if\-block on line 3 and thinks line 4 is the beginning of a new statement; it then gets confused by the unaccompanied `else`, as it has already forgotten about the `if` above. However, this code works if used inside a function or, more generally, inside a `{...}` block.
But `else` may always stay on the same line as the end of the if\-block’s statement. You are already familiar with the form
```
if(condition) {
## do something
} else {
## do something else
}
```
but this applies more generally. For instance:
```
if(condition)
x <- 1 else {
## do something else
}
```
or
```
if(condition) { ... } else {
## do something else
}
```
or
```
if(condition) { x <- 1; y <- 2 } else z <- 3
```
the last one\-liner uses the fact that we can write several statements on a single line with `;`. Arguably, such a style is often a bad idea, but it has its place, for instance when creating anonymous functions for `lapply` and friends.
### B.2\.2 Return Value
Like most commands in R, an if\-else block has a return value. This is the last value evaluated by the block, and it can be assigned to a variable. If the condition is false and there is no else block, `if` returns `NULL`. Using the return value of an if\-else statement is often a recipe for hard\-to\-read code, but it may be a good choice in other circumstances, for instance where the multi\-line form would take too much attention away from more crucial parts of the code. The line
```
n <- 3
parity <- if(n %% 2 == 0) "even" else "odd"
parity
```
```
## [1] "odd"
```
is rather easy to read. This is the closest general construct in R to the C conditional (ternary) shortcut `parity = n % 2 == 0 ? "even" : "odd"`.
Another good place for such compressed if\-else statements is
inside anonymous functions in `lapply` and friends. For instance, the following code
replaces `NULL`\-s and `NA`\-s in a list of characters:
```
emails <- list("ott@example.com", "xi@abc.net", NULL, "li@whitehouse.gov", NA)
n <- 0
sapply(emails, function(x) if(is.null(x) || is.na(x)) "-" else x)
```
```
## [1] "ott@example.com" "xi@abc.net" "-"
## [4] "li@whitehouse.gov" "-"
```
B.3 `switch`: Choosing Between Multiple Conditions
--------------------------------------------------
If\-else is appropriate when we have a small number of conditions, potentially only one. Switch is a related construct that tests many equality conditions at once. The best way to explain its syntax is through an example:
```
stock <- "AMZN"
switch(stock,
"AAPL" = "Apple Inc",
"AMZN" = "Amazon.com Inc",
"INTC" = "Intel Corp",
"unknown")
```
```
## [1] "Amazon.com Inc"
```
The switch expression attempts to match its first argument, here `stock = "AMZN"`, against the names of the remaining arguments. If one of the names matches, the value of that argument is returned. If none matches, the nameless default argument (here “unknown”) is returned. If there is no default argument, `switch` returns `NULL`.
Switch allows you to specify the same return value for multiple cases:
```
switch(stock,
"AAPL" = "Apple Inc",
"AMZN" =, "INTC" = "something else",
"unknown")
```
```
## [1] "something else"
```
If the matching named argument is empty, the next non\-empty named argument is evaluated. In this case that is “something else”, which is supplied for the name “INTC”.
As with `if`, `switch` only accepts a length\-1 expression. Switch also allows you to use an integer instead of a character expression; in that case the list of alternatives should be unnamed.
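A quick sketch of the integer form, where the unnamed alternatives are selected by position:
```
switch(2, "first", "second", "third") # returns "second"
```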
B.1 Loops
---------
One of the common problems that computers are good at, is to execute
similar tasks repeatedly many times. This can be achieved through
*loops*. R implements three loops: for\-loop, while\-loop, and
repeat\-loop, these are among the most popular loops in many modern
programming languages. Loops are often advised agains in case of R. While it
is true that loops in R are more demanding than in several other
languages, and can often be replaced by more efficient constructs,
they nevertheless have a very important role in R. Good usage of loops is
easy, fast, and improves the readability of your code.
### B.1\.1 For Loop
For\-loop is a good choice if you want to run some statements
repeatedly, each time picking a different value for a variable from a
sequence;
or just repeat some
statements a given number of times. The syntax of for\-loop is the
following:
```
for(variable in sequence) {
# do something many times. Each time 'variable' has
# a different value, taken from 'sequence'
}
```
The statement block inside of the curly braces {} is
repeated through the sequence.
Despite the curly braces, the loop body is still evaluated in the same
environment as the rest of the code. This means the variables
generated before entering the loop are visible from inside, and the
variables created inside will be visible after the loop ends. This is
in contrast to functions where the function body, also enclosed in curly
braces, is a separate scope.
A simple and perhaps the most common usage of for loop is just to run
some code a given number of times:
```
for(i in 1:5)
print(i)
```
```
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
```
Through the loop the variable `i` takes all the values from the
sequence `1:5`. Note: we do not need curly braces if the loop block
only contains a single statement.
But the sequence does not have to be a simple sequence of numbers. It
may as well be a list:
```
data <- list("Xiaotian", 25, c(2061234567, 2069876543))
for(field in data) {
cat("field length:", length(field), "\n")
}
```
```
## field length: 1
## field length: 1
## field length: 2
```
As another less common example, we may run different functions over
the same data:
```
data <- 1:4
for(fun in c(sqrt, exp, sin)) {
print(fun(data))
}
```
```
## [1] 1.000000 1.414214 1.732051 2.000000
## [1] 2.718282 7.389056 20.085537 54.598150
## [1] 0.8414710 0.9092974 0.1411200 -0.7568025
```
### B.1\.2 While\-loop
While\-loop let’s the programmer to decide explicitly if we want to execute the
statements one more time:
```
while(condition) {
# do something if the 'condition' is true
}
```
For instance, we can repeat something a given number of times
(although this is more of a task for for\-loop):
```
i <- 0
while(i < 5) {
print(i)
i <- i + 1
}
```
```
## [1] 0
## [1] 1
## [1] 2
## [1] 3
## [1] 4
```
Note that in case the condition is not true to begin with, the loop never
executes and control is immediately passed to the code after the loop:
```
i <- 10
while(i < 5) {
print(i)
i <- i + 1
}
cat("done\n")
```
```
## done
```
While\-loop is great for handling repeated attempts to get input in
shape. One can write something like this:
```
data <- getData()
while(!dataIsGood(data)) {
data <- getData(tryReallyHard = TRUE)
}
```
This will run `getData()` until data is good. If it is not good at
the first try, the code tries really hard next time(s).
While can
also easily do infinite loops with:
```
while(TRUE) { ... }
```
will execute until something breaks the loop. Another somewhat
interesting usage of `while` is to disable execution of part of your
code:
```
while(FALSE) {
## code you don't want to be executed
}
```
### B.1\.3 repeat\-loop
The R way of implementing repeat\-loop does not include any condition
and hence it just keeps running infdefinitely, or until something breaks
the execution (see below):
```
repeat {
## just do something indefinitely many times.
## if you want to stop, use `break` statement.
}
```
Repeat is useful in many similar situations as while. For instance,
we can write a modified version of getting data using `repeat`:
```
repeat {
data <- getData()
if(dataIsGood())
break
}
```
The main difference between this, and the previous `while`\-version is
that we a) don’t have to write `getData` function twice, and b) on
each try we attempt to get data in the same way.
Exactly as in the case of *for* and *while* loop, enclosing the loop body in curly braces
is only necessary if it contains more than one statement. For instance, if you want to close all the graphics
devices left open by buggy code, you can use the following one\-liner
from console:
```
repeat dev.off()
```
The loop will be broken by error when attempting to
close the last null device.
It should also be noted that the loop does not create a separate
environment (scope). All the variables from outside of the loop are
visible inside, and variables created inside of the loop remain
accessible after the loop terminates.
### B.1\.4 Leaving Early: `break` and `next`
A straightforward way to leave a loop is the `break` statement. It just leaves
the loop and transfers control to the code immediately after the loop
body. It breaks all three loops discussed here (*for*, *while*, and *repeat*\-loops)
but not some of the other types of loops implemented outside of base R. For instance:
```
for(i in 1:10) {
cat(i, "\n")
if(i > 4)
break
}
```
```
## 1
## 2
## 3
## 4
## 5
```
```
cat("done\n")
```
```
## done
```
As the loop body will use the same environment (the same variable
values) as the rest of the code, the loop variables can be used right
afterwards. For instance, we can see that `i = 5`:
```
print(i)
```
```
## [1] 5
```
Opposite to `break`, `next` will throw the execution flow back to the
head of the loop without running the commands following `next`. In
case of for\-loop, this means a new value is picked from the sequence,
in case of while\-loop the condition is evaluated again; the head of
repeat loop does not do any calculations. For instance, we can print
only even numbers:
```
for(i in 1:10) {
if(i %% 2 != 0)
next
print(i)
}
```
```
## [1] 2
## [1] 4
## [1] 6
## [1] 8
## [1] 10
```
`next` jumps over printing where modulo of division by 2 is not 0\.
### B.1\.5 When (Not) To Use Loops In R?
In general, use loops if it improves code readability, makes it easier
to extend the code, and does not cause too big inefficiencies. In practice, it
means use loops for repeating something slow.
Don’t use loops if you can use vectorized operators instead.
For instance, look at the examples above where we were reading data
until it was good. There is probably very little you can achieve by
avoiding loops–except making your code messy. First, the
input will probably attempted a few times only (otherwise re\-consider
how you get your data), and second, data input is probably a far
slower process than the loop overhead. Just use the loop.
Similar arguments hold if you consider to loop over input files to read, output
files to create, figures to plot, webpage to scrape… In all these cases
the loop overheads are negligible compared to the task performed
inside the loops. And there are no real vectorized alternatives
anyway.
The opposite is the case where good alternatives exist. Never do this
kind of computations in R:
```
res <- numeric()
for(i in 1:1000) {
res <- c(res, sqrt(i))
}
```
This loop introduces three inefficiencies. First, and most
importantly, you are growing your results vector `res` when adding new values to it. This involves
creating new and longer vectors again and again when the old one is
filled up. This is very slow. Second, as the true vectorized
alternatives exist here, one can easily speed up the code by a
magnitude or more by just switching to the vectorized code. Third, and
this is a very important point again, this code is much harder
to read than the pure vectorized expression:
```
res <- sqrt(1:1000)
```
We can easily demonstrate how much faster the vectorized expressions work.
Let’s do this using *microbenchmark* library (it provides nanosecond
timings). Let’s wrap these expressions in functions for easier
handling by *microbenchmark*. We also include
a middle version where we use a loop but avoid the terribly slow
vector re\-allocation by creating the right\-sized result in
advance:
```
seq <- 1:1000 # global variables are easier to handle with microbenchmark
baad <- function() {
r <- numeric()
for(i in seq)
r <- c(r, sqrt(i))
}
soso <- function() {
r <- numeric(length(seq))
for(i in seq)
r[i] <- sqrt(i)
}
good <- function()
sqrt(seq)
microbenchmark::microbenchmark(baad(), soso(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 826.782 836.8505 1121.75340 849.7555 876.0790 3884.545 100
## soso() 42.018 42.5170 58.71166 43.0100 44.0435 1576.462 100
## good() 2.259 2.4470 8.64512 2.6815 3.1675 549.440 100
```
When comparing the median values, we can see that the middle example
is 10x slower than the vectorized example, and the first version is
almost 300x slower!
The above holds for “true” vectorized operations. R provides many
pseudo\-vectorizers (the lapply family) that may be handy in many
circumstances, but not in case you need speed. Under the hood these
functions just use a loop. We can demonstrate this by adding one more function to our family:
```
soso2 <- function() {
sapply(seq, sqrt)
}
microbenchmark::microbenchmark(baad(), soso(), soso2(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 823.078 853.0670 1039.05146 862.1650 874.8660 3547.314 100
## soso() 41.704 42.9815 43.67443 43.3550 43.7230 68.680 100
## soso2() 219.362 222.0515 235.62589 223.8685 230.2925 1023.763 100
## good() 2.359 2.5435 2.93111 2.8205 3.0625 4.675 100
```
As it turns out, `sapply` is much slower than plain for\-loop.
Lapply\-family is not a substitute for true vectorization. However,
one may argue that `soso2` function is easier to understand than
explicit loop in `soso`. In any case, it is easier to type when
working on the interactive console.
Finally, it should be noted that when performing truly vectorized
operations on long vectors, the overhead of R interpreter becomes
negligible and the resulting speed is almost equalt to that of the
corresponding C code. And optimized R code can easily beat
non\-optimized C.
### B.1\.1 For Loop
For\-loop is a good choice if you want to run some statements
repeatedly, each time picking a different value for a variable from a
sequence;
or just repeat some
statements a given number of times. The syntax of for\-loop is the
following:
```
for(variable in sequence) {
# do something many times. Each time 'variable' has
# a different value, taken from 'sequence'
}
```
The statement block inside of the curly braces {} is
repeated through the sequence.
Despite the curly braces, the loop body is still evaluated in the same
environment as the rest of the code. This means the variables
generated before entering the loop are visible from inside, and the
variables created inside will be visible after the loop ends. This is
in contrast to functions where the function body, also enclosed in curly
braces, is a separate scope.
A simple and perhaps the most common usage of for loop is just to run
some code a given number of times:
```
for(i in 1:5)
print(i)
```
```
## [1] 1
## [1] 2
## [1] 3
## [1] 4
## [1] 5
```
As the loop runs, the variable `i` takes each value from the sequence `1:5` in turn. Note: we do not need curly braces if the loop body contains only a single statement.
But the sequence does not have to be a simple sequence of numbers. It
may as well be a list:
```
data <- list("Xiaotian", 25, c(2061234567, 2069876543))
for(field in data) {
cat("field length:", length(field), "\n")
}
```
```
## field length: 1
## field length: 1
## field length: 2
```
As another less common example, we may run different functions over
the same data:
```
data <- 1:4
for(fun in c(sqrt, exp, sin)) {
print(fun(data))
}
```
```
## [1] 1.000000 1.414214 1.732051 2.000000
## [1] 2.718282 7.389056 20.085537 54.598150
## [1] 0.8414710 0.9092974 0.1411200 -0.7568025
```
### B.1\.2 While\-loop
A while\-loop lets the programmer decide explicitly whether to execute the statements one more time:
```
while(condition) {
# do something if the 'condition' is true
}
```
For instance, we can repeat something a given number of times
(although this is more of a task for a for\-loop):
```
i <- 0
while(i < 5) {
print(i)
i <- i + 1
}
```
```
## [1] 0
## [1] 1
## [1] 2
## [1] 3
## [1] 4
```
Note that if the condition is false to begin with, the loop body never executes and control is immediately passed to the code after the loop:
```
i <- 10
while(i < 5) {
print(i)
i <- i + 1
}
cat("done\n")
```
```
## done
```
A while\-loop is great for handling repeated attempts to get input into shape. One can write something like this:
```
data <- getData()
while(!dataIsGood(data)) {
data <- getData(tryReallyHard = TRUE)
}
```
This will run `getData()` until the data is good. If it is not good on the first try, the code tries really hard on the following attempt(s).
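The `getData()` and `dataIsGood()` functions above are just placeholders. As a concrete, runnable sketch of the same idea (run it interactively, since `readline()` only prompts in an interactive session), we can keep asking until the input parses as a number:
```
# keep asking until the input can be interpreted as a number
x <- NA
while(is.na(x)) {
  x <- suppressWarnings(as.numeric(readline("Enter a number: ")))
}
x
```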
A while\-loop can also easily produce an infinite loop:
```
while(TRUE) { ... }
```
This will execute until something breaks the loop. Another somewhat interesting use of `while` is to disable execution of part of your code:
```
while(FALSE) {
## code you don't want to be executed
}
```
### B.1\.3 repeat\-loop
The R way of implementing a repeat\-loop does not include any condition, so it just keeps running indefinitely, or until something breaks the execution (see below):
```
repeat {
## just do something indefinitely many times.
## if you want to stop, use `break` statement.
}
```
Repeat is useful in many of the same situations as while. For instance, we can write a modified version of getting data using `repeat`:
```
repeat {
data <- getData()
if(dataIsGood(data))
break
}
```
The main difference between this and the previous `while`\-version is that a) we don’t have to call `getData()` in two places, and b) on each try we attempt to get the data in the same way.
Exactly as in the case of *for*\- and *while*\-loops, enclosing the loop body in curly braces is only necessary if it contains more than one statement. For instance, if you want to close all the graphics devices left open by buggy code, you can use the following one\-liner from the console:
```
repeat dev.off()
```
The loop will be broken by the error that occurs when attempting to close the last (null) device.
It should also be noted that the loop does not create a separate
environment (scope). All the variables from outside of the loop are
visible inside, and variables created inside of the loop remain
accessible after the loop terminates.
### B.1\.4 Leaving Early: `break` and `next`
A straightforward way to leave a loop is the `break` statement. It just leaves
the loop and transfers control to the code immediately after the loop
body. It breaks all three loops discussed here (*for*, *while*, and *repeat*\-loops)
but not some of the other types of loops implemented outside of base R. For instance:
```
for(i in 1:10) {
cat(i, "\n")
if(i > 4)
break
}
```
```
## 1
## 2
## 3
## 4
## 5
```
```
cat("done\n")
```
```
## done
```
As the loop body will use the same environment (the same variable
values) as the rest of the code, the loop variables can be used right
afterwards. For instance, we can see that `i = 5`:
```
print(i)
```
```
## [1] 5
```
The opposite of `break`, `next` throws the execution flow back to the head of the loop without running the commands that follow it. In the case of a for\-loop this means a new value is picked from the sequence; in the case of a while\-loop the condition is evaluated again; the head of a repeat\-loop does not do any computation. For instance, we can print only even numbers:
```
for(i in 1:10) {
if(i %% 2 != 0)
next
print(i)
}
```
```
## [1] 2
## [1] 4
## [1] 6
## [1] 8
## [1] 10
```
`next` skips the printing whenever the remainder of the division by 2 is not 0\.
### B.1\.5 When (Not) To Use Loops In R?
In general, use loops if doing so improves code readability, makes it easier to extend the code, and does not introduce too much inefficiency. In practice, this means using loops to repeat something that is itself slow.
Don’t use loops if you can use vectorized operators instead.
For instance, look at the examples above where we were reading data until it was good. There is probably very little you can achieve by avoiding loops there, except making your code messy. First, the input will probably be attempted only a few times (otherwise re\-consider how you get your data), and second, data input is probably a far slower process than the loop overhead. Just use the loop.
Similar arguments hold if you consider looping over input files to read, output files to create, figures to plot, web pages to scrape… In all these cases the loop overhead is negligible compared to the task performed inside the loop. And there are no real vectorized alternatives anyway.
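As a small sketch of such an I/O\-bound loop (the file names below are hypothetical), reading a handful of CSV files and combining them is a perfectly good use of a plain for\-loop, because `read.csv()` dominates the runtime:
```
files <- c("jan.csv", "feb.csv", "mar.csv")  # hypothetical input files
results <- vector("list", length(files))     # pre-allocate the result list
for(i in seq_along(files)) {
  results[[i]] <- read.csv(files[i])
}
data <- do.call(rbind, results)              # combine into one data frame
```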
The opposite is the case where good alternatives exist. Never do this kind of computation in R:
```
res <- numeric()
for(i in 1:1000) {
res <- c(res, sqrt(i))
}
```
This loop introduces three inefficiencies. First, and most importantly, you are growing your result vector `res` as you add new values to it. This involves creating new, longer vectors again and again as the old one fills up, which is very slow. Second, as true vectorized alternatives exist here, one can easily speed up the code by an order of magnitude or more just by switching to the vectorized version. Third, and this is again an important point, this code is much harder to read than the pure vectorized expression:
```
res <- sqrt(1:1000)
```
We can easily demonstrate how much faster the vectorized expressions work. Let’s do this using the *microbenchmark* library (it provides nanosecond timings). We wrap these expressions in functions for easier handling by *microbenchmark*, and we also include a middle version that uses a loop but avoids the terribly slow vector re\-allocation by creating the right\-sized result in advance:
```
seq <- 1:1000 # global variables are easier to handle with microbenchmark
baad <- function() {
r <- numeric()
for(i in seq)
r <- c(r, sqrt(i))
}
soso <- function() {
r <- numeric(length(seq))
for(i in seq)
r[i] <- sqrt(i)
}
good <- function()
sqrt(seq)
microbenchmark::microbenchmark(baad(), soso(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 826.782 836.8505 1121.75340 849.7555 876.0790 3884.545 100
## soso() 42.018 42.5170 58.71166 43.0100 44.0435 1576.462 100
## good() 2.259 2.4470 8.64512 2.6815 3.1675 549.440 100
```
Comparing the median values, we can see that the middle example is roughly 16x slower than the vectorized example, and the first version is over 300x slower!
The above holds for “true” vectorized operations. R provides many pseudo\-vectorizers (the lapply family) that may be handy in many circumstances, but not when you need speed. Under the hood these functions just use a loop. We can demonstrate this by adding one more function to our family:
```
soso2 <- function() {
sapply(seq, sqrt)
}
microbenchmark::microbenchmark(baad(), soso(), soso2(), good())
```
```
## Unit: microseconds
## expr min lq mean median uq max neval
## baad() 823.078 853.0670 1039.05146 862.1650 874.8660 3547.314 100
## soso() 41.704 42.9815 43.67443 43.3550 43.7230 68.680 100
## soso2() 219.362 222.0515 235.62589 223.8685 230.2925 1023.763 100
## good() 2.359 2.5435 2.93111 2.8205 3.0625 4.675 100
```
As it turns out, `sapply` is much slower than the plain (pre\-allocated) for\-loop. The lapply family is not a substitute for true vectorization. However, one may argue that the `soso2` function is easier to understand than the explicit loop in `soso`. In any case, it is easier to type when working in the interactive console.
Finally, it should be noted that when performing truly vectorized operations on long vectors, the overhead of the R interpreter becomes negligible and the resulting speed is almost equal to that of the corresponding C code. And optimized R code can easily beat non\-optimized C.
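You can partially check this on your own machine (no C comparison is attempted here): timing `sqrt()` on vectors of very different lengths shows that the total time grows roughly in proportion to the length, i.e. the fixed per\-call interpreter overhead is negligible for long vectors:
```
short <- rnorm(1e3)
long <- rnorm(1e6)   # 1000 times longer
microbenchmark::microbenchmark(sqrt(short), sqrt(long), times = 100)
```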
B.2 More about `if` and `else`
------------------------------
The basics of if\-else blocks were covered in Chapter [Functions](functions.html#functions).
Here we discuss some more advanced aspects.
### B.2\.1 Where To Put Else
Normally R wants to understand whether a line of code belongs to an earlier statement or whether it begins a fresh one. For instance,
```
if(condition) {
## do something
}
else {
## do something else
}
```
may fail. R finishes the if\-block at the closing brace on the third line and treats the fourth line as the beginning of a new statement. It then gets confused by the unaccompanied `else`, as it has already forgotten about the `if` above. However, this code works if used inside a function or, more generally, inside a {…} block.
But `else` may always stay on the same line as the end of the if\-branch. You are already familiar with the form
```
if(condition) {
## do something
} else {
## do something else
}
```
but this applies more generally. For instance:
```
if(condition)
x <- 1 else {
## do something else
}
```
or
```
if(condition) { ... } else {
## do something else
}
```
or
```
if(condition) { x <- 1; y <- 2 } else z <- 3
```
The last one\-liner uses the fact that we can write several statements on a single line with `;`. Arguably, such a style is often a bad idea, but it has its place, for instance when creating anonymous functions for `lapply` and friends.
### B.2\.2 Return Value
Like most commands in R, an if\-else block has a return value. This is the last value evaluated by the block, and it can be assigned to a variable. If the condition is false and there is no else block, `if` returns `NULL`.
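A minimal illustration of the `NULL` case:
```
x <- if(2 > 3) "bigger"   # condition is FALSE and there is no else-branch
is.null(x)
```
```
## [1] TRUE
```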
Using the return value of an if\-else statement is often a recipe for hard\-to\-read code, but it may be a good choice in other circumstances, for instance where the multi\-line form would take too much attention away from more crucial parts of the code. The line
```
n <- 3
parity <- if(n %% 2 == 0) "even" else "odd"
parity
```
```
## [1] "odd"
```
is rather easy to read. This is the closest general construct in R to the C conditional shortcut `parity = n % 2 == 0 ? "even" : "odd"`.
Another good place for such compressed if\-else statements is
inside anonymous functions in `lapply` and friends. For instance, the following code
replaces `NULL`\-s and `NA`\-s in a list of characters:
```
emails <- list("ott@example.com", "xi@abc.net", NULL, "li@whitehouse.gov", NA)
sapply(emails, function(x) if(is.null(x) || is.na(x)) "-" else x)
```
```
## [1] "ott@example.com" "xi@abc.net" "-"
## [4] "li@whitehouse.gov" "-"
```
B.3 `switch`: Choosing Between Multiple Conditions
--------------------------------------------------
If\-else is appropriate if we have a small number of conditions, potentially only one. Switch is a related construct that tests one value against a larger number of alternatives. The best way to explain its syntax is through an example:
```
stock <- "AMZN"
switch(stock,
"AAPL" = "Apple Inc",
"AMZN" = "Amazon.com Inc",
"INTC" = "Intel Corp",
"unknown")
```
```
## [1] "Amazon.com Inc"
```
The switch expression attempts to match its first argument, here `stock = "AMZN"`, against the names of the remaining arguments. If it matches one of the names, the corresponding argument is returned. If none matches, the nameless default argument (here “unknown”) is returned. If there is no default argument, `switch` returns `NULL`.
Switch allows you to specify the same return value for multiple cases:
```
switch(stock,
"AAPL" = "Apple Inc",
"AMZN" =, "INTC" = "something else",
"unknown")
```
```
## [1] "something else"
```
If the matching named argument is empty, the next non\-empty argument is evaluated and returned. In this case that is “something else”, which carries the name “INTC”.
As in the case of `if`, `switch` only accepts length\-1 expressions. Switch also allows an integer instead of a character expression; in that case the list of alternatives should be given without names, and the value is picked by position.
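A brief illustration of the integer form, where the alternative is picked by position:
```
switch(2, "small", "medium", "large")
```
```
## [1] "medium"
```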
C Thinking Big: Data Tables
===========================
Data frames are core elements for data handling in R. However, they suffer from several limitations. One of the major issues with data frames is that they are memory hungry and slow. This is not an issue when working with relatively small datasets (say, up to 100,000 rows). However, when your dataset size exceeds gigabytes, data frames may be infeasibly slow and too memory hungry.
C.1 Background: Passing By Value And Passing By Reference
---------------------------------------------------------
R is (mostly) a pass\-by\-value language. This means that when you modify the data, at every step a new copy of the complete modified object is created and stored in memory, and the former object is freed (garbage\-collected) if it is not in use any more.
The main advantage of this approach is consistency: we have the guarantee that functions do not modify their inputs. However, in the case of large objects, copying may be slow, and, even worse, it requires at least twice as much memory before the old object is freed. In more complex processing pipelines, the memory consumption may be even more than twice the size of the original object.
Data tables implement a number of pass\-by\-reference functions. In pass\-by\-reference, the function is not given a fresh copy of the inputs, but is instead told where the object is in memory. Instead of copying gigabytes of data, only a single tiny memory pointer is passed. But this also means the function is now accessing and modifying the original object, not a copy of it. This may sometimes lead to bugs and unexpected behavior, but careful use of the pass\-by\-reference approach may improve the speed and lower the memory footprint substantially.
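A minimal sketch of the difference, using base R’s `tracemem()` to report when an object is copied (the exact messages depend on your R build; the `:=` operator is explained further below):
```
df <- data.frame(x = 1:5)
tracemem(df)        # start reporting copies of df
df$x <- df$x * 2    # pass-by-value: R reports that df was copied

library(data.table)
dt <- data.table(x = 1:5)
tracemem(dt)
dt[, x := x * 2]    # modified by reference: no copy is reported
```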
C.2 Data Tables: Introduction
-----------------------------
Data tables and most of the related goodies live in the *data.table* package, so you either have to load the library or specify the namespace when using the functions.
### C.2\.1 Replacement for Data Frames (Sort of)
Data tables are designed to be largely a replacement for data frames. The syntax is similar and they are largely interchangeable. For instance, we can create and play with a data table as
```
library(data.table)
dt <- data.table(id=1:5, x=rnorm(5), y=runif(5))
dt
```
```
## id x y
## 1: 1 -0.1759081 0.3587206
## 2: 2 -1.0397919 0.7036122
## 3: 3 1.3415712 0.6285581
## 4: 4 -0.7327195 0.6126032
## 5: 5 -1.7042843 0.7579853
```
The result looks almost identical to a similar data frame (the only difference is the colons after the row numbers). Behind the scenes these objects are almost identical too: both are lists of vectors. This structural similarity allows data tables to be used as drop\-in replacements for data frames, at least in some circumstances. For instance, we can extract variables with `$`:
```
dt$x
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
or rows with row indices:
```
dt[c(2,4),]
```
```
## id x y
## 1: 2 -1.0397919 0.7036122
## 2: 4 -0.7327195 0.6126032
```
However, data tables use unquoted variables names (like *dplyr*) by
default:
```
dt[,x]
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
In case we need to store the variable name in another variable, we have to use the additional argument `with`:
```
var <- "x"
dt[, var, with=FALSE]
```
```
## x
## 1: -0.1759081
## 2: -1.0397919
## 3: 1.3415712
## 4: -0.7327195
## 5: -1.7042843
```
Note also that instead of getting a vector, we now get a data.table with a single column “x”. This behavior is the main reason why, when replacing data frames with data tables, one may need to change quite a bit of code.
### C.2\.2 Fast Reading and Writing
Many data frame users may appreciate the fact that the data input\-output functions `fread` and `fwrite` run at least an order of magnitude faster on large files. These are largely replacements for `read.table` and `write.table`; however, their syntax differs noticeably in places.
In particular, `fread` accepts either a file name, an http URL, or a *shell command that prints output*; it automatically detects the column separator, but it does not automatically open compressed files. The latter is not a big deal when using unix, where one can just issue
```
data <- fread("bzcat data.csv.bz2")
```
However, the decompression is not that simple on windows and hence it
is hard to write platform\-independent code that opens compressed
files.[2](#fn2)
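A minimal round\-trip sketch with an uncompressed file (the table and file name here are arbitrary):
```
library(data.table)
tbl <- data.table(id = 1:3, x = c(0.5, 1.2, -0.3))
fwrite(tbl, "tbl.csv")    # fast writer, analogous to write.csv
tbl2 <- fread("tbl.csv")  # fast reader; separator and column types are detected
all.equal(tbl, tbl2)
```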
If your computer has enough memory and speed is not an issue, your interest in data tables may end here. You can just transform a data table into a data frame with `setDF` (and the other way around with `setDT`). Let’s transform our data table into a data frame:
```
setDF(dt)
dt
```
```
## id x y
## 1 1 -0.1759081 0.3587206
## 2 2 -1.0397919 0.7036122
## 3 3 1.3415712 0.6285581
## 4 4 -0.7327195 0.6126032
## 5 5 -1.7042843 0.7579853
```
Do you see that the colons after row names are gone? This means `dt`
now is a data frame.
Note that this function behaves very differently from what we have
learned earlier: it modifies the object *in place* (by reference). We
do not have to assign the result into a new variable using a construct
like `df <- setDF(dt)` (but we still can write like this, handy when
using magrittr pipes). This is a manifestation of the power of
data.tables: the object is not copied but the same object is modified
in memory instead. `setDF` and `setDT` are very efficient, even huge
tables are converted instantly with virtually no need for any
additional memory.
However, big power comes hand\-in\-hand with big responsibility: it is easy to forget that `setDF` modifies its argument.
C.3 Indexing: The Major Powerhorse of Data Tables
-------------------------------------------------
Data tables’ indexing is much more powerful than that of data frames. The single\-bracket indexing is a powerful (albeit confusing) set of functions. Its general syntax is as follows:
```
dt[i, j, by]
```
where `i` specifies what to do with rows (for instance, select certain
rows), `j` tells what to do with columns (such as select columns,
compute new columns, aggregate columns), and `by` contains the
grouping variables.
Let’s demonstrate this with the *flights* data from *nycflights13*
package. We load the data and transform it into data.table:
```
data(flights, package="nycflights13")
setDT(flights)
head(flights)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## 4: 2013 1 1 544 545 -1 1004
## 5: 2013 1 1 554 600 -6 812
## 6: 2013 1 1 554 558 -4 740
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## 4: 1022 -18 B6 725 N804JB JFK BQN 183
## 5: 837 -25 DL 461 N668DN LGA ATL 116
## 6: 728 12 UA 1696 N39463 EWR ORD 150
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
## 4: 1576 5 45 2013-01-01 05:00:00
## 5: 762 6 0 2013-01-01 06:00:00
## 6: 719 5 58 2013-01-01 05:00:00
```
### C.3\.1 i: Select Observations
Obviously, we can always just tell which observations we want:
```
flights[c(1:3),]
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
```
picks the first three lines from the data. Maybe more interestingly,
we can use the special variable `.N` (the number of rows), to get the
penultimate row:
```
flights[.N-1,]
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 9 30 NA 1159 NA NA
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 1344 NA MQ 3572 N511MQ LGA CLE NA
## distance hour minute time_hour
## 1: 419 11 59 2013-09-30 11:00:00
```
We can select observations with logical index vector in the same way as in data frames:
```
head(flights[origin == "EWR" & dest == "SEA",], 3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 724 725 -1 1020
## 2: 2013 1 1 857 851 6 1157
## 3: 2013 1 1 1418 1419 -1 1726
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 1030 -10 AS 11 N594AS EWR SEA 338
## 2: 1222 -25 UA 1670 N45440 EWR SEA 343
## 3: 1732 -6 UA 16 N37464 EWR SEA 348
## distance hour minute time_hour
## 1: 2402 7 25 2013-01-01 07:00:00
## 2: 2402 8 51 2013-01-01 08:00:00
## 3: 2402 14 19 2013-01-01 14:00:00
```
The code above creates a new data table including only flights from Newark to Seattle. However, note that we just use `origin`, and not `flights$origin`, as would be the case with data frames. Data tables evaluate the arguments as if inside the `with()` function.
The first form, integer indexing, corresponds to dplyr’s `slice` function, while the logical form is equivalent to `filter`.
### C.3\.2 j: Work with Columns
`j` is perhaps the most powerful (and most confusing) of all the arguments for data table indexing. It allows us both to select columns and to do more complex tasks. Let’s start with selection:
```
head(flights[, dest], 3)
```
```
## [1] "IAH" "IAH" "MIA"
```
selects only the `dest` variable from the data. Note that this results in a vector, not in a single\-variable data table. If you want the latter, you can do
```
head(flights[, .(dest)], 3)
```
```
## dest
## 1: IAH
## 2: IAH
## 3: MIA
```
`.()` is just an alias for `list()`, encoded differently in data tables to improve readability and make it easier to type. If we want to select more than one variable, we can use the list syntax:
```
head(flights[, .(origin, dest)], 3)
```
```
## origin dest
## 1: EWR IAH
## 2: LGA IAH
## 3: JFK MIA
```
Selection supports a number of goodies, such as ranges of variables
with `:` (for instance, `dep_time:arr_delay`) and excluding variables
with `!` or `-` (for instance, `-year`).
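A quick sketch of these two forms (output not shown):
```
head(flights[, dep_time:arr_delay], 3)   # a range of consecutive columns
head(flights[, !"year"], 3)              # all columns except 'year'
```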
Obviously we can combine both `i` and `j`: let’s select origin and
departure delay for flights to Seattle:
```
head(flights[dest == "SEA", .(origin, dep_delay)], 3)
```
```
## origin dep_delay
## 1: EWR -1
## 2: JFK 13
## 3: EWR 6
```
The examples so far broadly correspond to dplyr’s `select`.
But `j` is not just for selecting. It is also for computing. Let’s
find the mean arrival delay for flights to Seattle:
```
flights[dest == "SEA", mean(arr_delay, na.rm=TRUE)]
```
```
## [1] -1.099099
```
Several variables can be returned by wrapping them (and optionally naming them) in `.()`. For instance, let’s find the average departure and arrival delay for all flights to Seattle, given the flight was delayed on arrival, and name these `dep` and `arr`:
```
flights[dest == "SEA" & arr_delay > 0,
.(dep = mean(dep_delay, na.rm=TRUE), arr = mean(arr_delay, na.rm=TRUE))]
```
```
## dep arr
## 1: 33.98266 39.79984
```
The result is a data table with two variables.
We can use the special variable `.N` to count the rows:
```
flights[dest == "SEA" & arr_delay > 0, .N]
```
```
## [1] 1269
```
will tell us how many flights to Seattle were delayed at arrival.
Handling the case where the variable names are stored in other variables is not that hard, but it still adds a layer of complexity. We can specify the variables in the `.SDcols` parameter. This parameter determines which columns go into the special variable `.SD` (Subset of Data). Afterwards we write an `lapply` expression in `j`:
```
flights[dest == "SEA" & arr_delay > 0,
lapply(.SD, function(x) mean(x, na.rm=TRUE)),
.SDcols = c("arr_delay", "dep_delay")]
```
```
## arr_delay dep_delay
## 1: 39.79984 33.98266
```
Let’s repeat: `.SDcols` determines which variables will go into the
special `.SD` list (default: all). `lapply` in `j` computes mean
values of each of the variables in the `.SD` list. This procedure
feels complex, although it is internally optimized.
These examples correspond to dplyr’s `summarize`. One can argue, however, that data tables’ syntax is more confusing and harder to read. Note also that while the functionality data tables offer here is optimized for speed and memory efficiency, it still returns a new object: aggregation does not work by reference.
### C.3\.3 Group in `by`
Finally, all of the above can be computed by groups using `by`. Let’s compute the average delays above by carrier and origin:
```
flights[dest == "SEA" & arr_delay > 0,
.(dep = mean(dep_delay, na.rm=TRUE), arr = mean(arr_delay, na.rm=TRUE)),
by = .(carrier, origin)]
```
```
## carrier origin dep arr
## 1: DL JFK 29.82373 39.49831
## 2: B6 JFK 28.49767 40.32093
## 3: UA EWR 40.24053 41.85078
## 4: AS EWR 31.80952 34.36508
## 5: AA JFK 34.04132 40.48760
```
We just had to specify the `by` argument that lists the grouping variables. If there is more than one, they should be wrapped in a list with the `.()` function.
We can use the `.N` variable to get the group size. How many flights
did each carrier from each origin?
```
flights[, .N, by=.(carrier, origin)] %>%
head(3)
```
```
## carrier origin N
## 1: UA EWR 46087
## 2: UA LGA 8044
## 3: AA JFK 13783
```
Finally, we can also use quoted variables for grouping, simply by replacing `.()` with `c()`:
```
flights[, .N, by=c("carrier", "origin")] %>%
head(3)
```
```
## carrier origin N
## 1: UA EWR 46087
## 2: UA LGA 8044
## 3: AA JFK 13783
```
In the dplyr context, the examples here correspond to the `group_by` and `summarize` verbs.
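For comparison, a sketch of the dplyr equivalent of the last count (assuming dplyr is loaded):
```
flights %>%
  group_by(carrier, origin) %>%
  summarize(N = n())
```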
Read more about the basic usage in the data.table vignette [Data analysis using data.table](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html).
C.4 `:=`–Create variables by reference
--------------------------------------
When summarizing, the values we compute in `j` always create a new data.table; reducing operations cannot be done in place. Computing new variables, however, can be done in place.
In\-place variable computations (without summarizing) are done with the `:=` assignment operator in `j`. Let’s compute a new variable, speed, for each flight. We can do this as follows:
```
flights[, speed := distance/(air_time/60)]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750
```
We see the new variable, speed, included as the last variable in the
data. Note we did this operation *by reference*, i.e. we did not
assign the result to a new data table. The existing table was
modified in place.
The same assignment operator also permits us to remove variables by
setting these to `NULL`. Let’s remove speed:
```
flights[, speed := NULL]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
```
Indeed, there is no speed any more.
Assigning more than one variable by reference may feel somewhat more intimidating:
```
flights[, c("speed", "meanDelay") := .(distance/(air_time/60), (arr_delay + dep_delay)/2)]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed meanDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5
```
Assignment works together with both selection and grouping. For instance, we may want to replace negative delays with zeros:
```
flights[ arr_delay < 0, arr_delay := 0][, arr_delay] %>%
head(20)
```
```
## [1] 11 20 33 0 0 12 19 0 0 8 0 0 7 0 31 0 0 0 12 0
```
Indeed, we only see positive numbers and zeros. But be careful: now
we have overwritten the `arr_delay` in the original data. We cannot
restore the previous state any more without re\-loading the dataset.
As an example of
groupings, let’s compute the maximum departure delay by origin:
```
flights[, maxDelay := max(dep_delay, na.rm=TRUE), by=origin] %>%
head(4)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## 4: 2013 1 1 544 545 -1 1004
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## 4: 1022 0 B6 725 N804JB JFK BQN 183
## distance hour minute time_hour speed meanDelay maxDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5 1126
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0 911
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5 1301
## 4: 1576 5 45 2013-01-01 05:00:00 516.7213 -9.5 1301
```
We can see that `by` caused the maximum delay to be computed for each group; however, the data is not summarized: the group maximum is simply added to every single row.
Finally, if you *do not* want to modify the original data, you should use the `copy` function. This makes a deep copy of the data, and you can modify the copy afterwards:
```
fl <- copy(flights)
fl <- fl[, .(origin, dest)]
head(fl, 3)
```
```
## origin dest
## 1: EWR IAH
## 2: LGA IAH
## 3: JFK MIA
```
```
head(flights, 3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed meanDelay maxDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5 1126
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0 911
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5 1301
```
As you see, the `flights` data has not changed.
These operations correspond to dplyr’s `mutate` verb. However, `mutate` always makes a copy of the original dataset, something that may well make your analysis slow and sluggish with large data.
Read more in the vignette [Data.table reference semantics](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reference-semantics.html)
C.5 keys
--------
Data tables allow fast lookup based on a *key*. In its simplest version, a key is a column (or several columns) which is used to pre\-sort the data table. Pre\-sorting makes it much faster to look up certain values and to perform grouping operations and merges. As data can only be sorted according to one rule at a time, there can only be one key on a data.table (but a key may be based on several variables).
Let’s set origin and destination as keys for the data table:
```
data(flights, package="nycflights13")  # reload a fresh copy of the flights data
```
```
setDT(flights, key=c("origin", "dest"))
fl <- flights[,.(origin, dest, arr_delay)]
# focus on a few variables only
head(fl, 3)
```
```
## origin dest arr_delay
## 1: EWR ALB 0
## 2: EWR ALB 40
## 3: EWR ALB 44
```
We see that both origin and destination are alphabetically ordered.
Note that when selecting variables, the resulting data table `fl` will
have the same keys as the original one.
When set, we can easily subset by key by just feeding the key values
in `i`:
```
fl["LGA"] %>%
head(5)
```
```
## origin dest arr_delay
## 1: LGA ATL 0
## 2: LGA ATL 12
## 3: LGA ATL 5
## 4: LGA ATL 0
## 5: LGA ATL 17
```
will extract all LaGuardia\-originating flights. In terms of output, this is equivalent to `fl[origin == "LGA"]`, just much more efficient. When you want to extract flights based on an origin\-destination pair, you can just add both key columns:
```
fl[.("EWR", "SEA")] %>%
head(4)
```
```
## origin dest arr_delay
## 1: EWR SEA 0
## 2: EWR SEA 0
## 3: EWR SEA 0
## 4: EWR SEA 0
```
Again, this can be achieved in other ways, but keys are more efficient. Finally, if we want to extract based on the second key only, the syntax is more confusing:
fl[.(unique(origin), "SEA")] %>%
head(4)
```
```
## origin dest arr_delay
## 1: EWR SEA 0
## 2: EWR SEA 0
## 3: EWR SEA 0
## 4: EWR SEA 0
```
We have to tell `[` that we want to extract all observations where the first key can be anything (all unique origins) and the second one is “SEA”.
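For comparison, the same rows can be obtained with an ordinary logical subset, which does not rely on the key order but is typically slower on very large tables:
```
fl[dest == "SEA"] %>%
  head(4)
```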
Read more in the vignette [Keys and fast binary search based subset](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html).
C.6 Resources
-------------
* [Data Table CRAN page](https://cran.r-project.org/web/packages/data.table/index.html).
Vignettes are a very valuable source of information.
C.1 Background: Passing By Value And Passing By Reference
---------------------------------------------------------
R is (mostly) a pass\-by\-value language. This means that when you
modify the data, at every step a new copy of the complete modified
object is created, stored in memory, and the former object is freed
(carbage\-collected) if not in use any more.
The main advantage of this approach is consistency: we have the
guarantee that functions do not modify their inputs. However, in case
of large objects, copying may be slow, and even more, it requires
at least twice as much memory before the old object is freed. In case
of more complex process pipelines, the memory consumption may be even more
than twice of the size of the original object.
Data tables implement a number of pass\-by\-reference functions. In
pass\-by\-reference, the function is not given a fresh copy of the
inputs, but is instead told where the object is in memory. Instead of
copying gigabytes of data, only a single tiny memory pointer is
passed. But this also means the function now is accessing and
modifying the original object, not a copy of it. This may sometimes
lead to bugs and unexpected behavior, but professional use of
pass\-by\-reference approach may improve the speed and lower the memory
footprint substantially.
C.2 Data Tables: Introduction
-----------------------------
Data tables and most of the related goodies live in *data.table*
library, so you either have to load the library or specify the
namespace when using the functions.
### C.2\.1 Replacement for Data Frames (Sort of)
Data tables are designed to be largely a replacement to data frames.
The syntax is similar and they are largely replaceable. For instance,
we can create and play with a data table as
```
library(data.table)
dt <- data.table(id=1:5, x=rnorm(5), y=runif(5))
dt
```
```
## id x y
## 1: 1 -0.1759081 0.3587206
## 2: 2 -1.0397919 0.7036122
## 3: 3 1.3415712 0.6285581
## 4: 4 -0.7327195 0.6126032
## 5: 5 -1.7042843 0.7579853
```
The result looks almost identical to a similar data frame (the only
difference are the colons after the row numbers). Behind the scenes
these objects are almost identical too–both objects are lists of
vectors. This structural similarity allows to use data tables
as drop\-in replacements for dataframes, at least in some
circumstances. For instance, we can extract variables with `$`:
```
dt$x
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
or rows with row indices:
```
dt[c(2,4),]
```
```
## id x y
## 1: 2 -1.0397919 0.7036122
## 2: 4 -0.7327195 0.6126032
```
However, data tables use unquoted variables names (like *dplyr*) by
default:
```
dt[,x]
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
In case we need to store the variable name into another variable, with
have to use the additional argument `with`:
```
var <- "x"
dt[, var, with=FALSE]
```
```
## x
## 1: -0.1759081
## 2: -1.0397919
## 3: 1.3415712
## 4: -0.7327195
## 5: -1.7042843
```
Note also that instead of getting a vector, now we get a data.table
with a single column “x” in the first. This behavior is the main culprit that when
replacing data frames with data tables one may need to change quite a
bit of code.
### C.2\.2 Fast Reading and Writing
Many data frame users may appreciate the fact that the data
input\-output function `fread` and `fwrite` run at least a magnitude
faster on large files. These are largely replacement for `read.table`
and `write.table`, however they syntax differs noticeably in places.
In particular, `fread` accepts either a file name, http\-url, or a *shell command
that prints output*; it automatically detects the column separator,
but it
does not automatically open compressed files. The latter is not a big
deal when using unix where one can just issue
```
data <- fread("bzcat data.csv.bz2")
```
However, the decompression is not that simple on windows and hence it
is hard to write platform\-independent code that opens compressed
files.[2](#fn2)
If your computer has enough memory and speed is not an issue, your
interest for data tables may end here. You can just transform data
table into a data frame with `setDF` (and the other way around with `setDT`). Let’s transform our data table to data
frame:
```
setDF(dt)
dt
```
```
## id x y
## 1 1 -0.1759081 0.3587206
## 2 2 -1.0397919 0.7036122
## 3 3 1.3415712 0.6285581
## 4 4 -0.7327195 0.6126032
## 5 5 -1.7042843 0.7579853
```
Do you see that the colons after row names are gone? This means `dt`
now is a data frame.
Note that this function behaves very differently from what we have
learned earlier: it modifies the object *in place* (by reference). We
do not have to assign the result into a new variable using a construct
like `df <- setDF(dt)` (but we still can write like this, handy when
using magrittr pipes). This is a manifestation of the power of
data.tables: the object is not copied but the same object is modified
in memory instead. `setDF` and `setDT` are very efficient, even huge
tables are converted instantly with virtually no need for any
additional memory.
However, big powers come hand\-in\-hand with big responsibility:
it is easy to forget that `setDF` modifies the function argument.
### C.2\.1 Replacement for Data Frames (Sort of)
Data tables are designed to be largely a replacement to data frames.
The syntax is similar and they are largely replaceable. For instance,
we can create and play with a data table as
```
library(data.table)
dt <- data.table(id=1:5, x=rnorm(5), y=runif(5))
dt
```
```
## id x y
## 1: 1 -0.1759081 0.3587206
## 2: 2 -1.0397919 0.7036122
## 3: 3 1.3415712 0.6285581
## 4: 4 -0.7327195 0.6126032
## 5: 5 -1.7042843 0.7579853
```
The result looks almost identical to a similar data frame (the only
difference are the colons after the row numbers). Behind the scenes
these objects are almost identical too–both objects are lists of
vectors. This structural similarity allows to use data tables
as drop\-in replacements for dataframes, at least in some
circumstances. For instance, we can extract variables with `$`:
```
dt$x
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
or rows with row indices:
```
dt[c(2,4),]
```
```
## id x y
## 1: 2 -1.0397919 0.7036122
## 2: 4 -0.7327195 0.6126032
```
However, data tables use unquoted variables names (like *dplyr*) by
default:
```
dt[,x]
```
```
## [1] -0.1759081 -1.0397919 1.3415712 -0.7327195 -1.7042843
```
In case we need to store the variable name into another variable, with
have to use the additional argument `with`:
```
var <- "x"
dt[, var, with=FALSE]
```
```
## x
## 1: -0.1759081
## 2: -1.0397919
## 3: 1.3415712
## 4: -0.7327195
## 5: -1.7042843
```
Note also that instead of getting a vector, now we get a data.table
with a single column “x” in the first. This behavior is the main culprit that when
replacing data frames with data tables one may need to change quite a
bit of code.
### C.2\.2 Fast Reading and Writing
Many data frame users may appreciate the fact that the data
input\-output function `fread` and `fwrite` run at least a magnitude
faster on large files. These are largely replacement for `read.table`
and `write.table`, however they syntax differs noticeably in places.
In particular, `fread` accepts either a file name, http\-url, or a *shell command
that prints output*; it automatically detects the column separator,
but it
does not automatically open compressed files. The latter is not a big
deal when using unix where one can just issue
```
data <- fread("bzcat data.csv.bz2")
```
However, the decompression is not that simple on windows and hence it
is hard to write platform\-independent code that opens compressed
files.[2](#fn2)
If your computer has enough memory and speed is not an issue, your
interest for data tables may end here. You can just transform data
table into a data frame with `setDF` (and the other way around with `setDT`). Let’s transform our data table to data
frame:
```
setDF(dt)
dt
```
```
## id x y
## 1 1 -0.1759081 0.3587206
## 2 2 -1.0397919 0.7036122
## 3 3 1.3415712 0.6285581
## 4 4 -0.7327195 0.6126032
## 5 5 -1.7042843 0.7579853
```
Do you see that the colons after row names are gone? This means `dt`
now is a data frame.
Note that this function behaves very differently from what we have
learned earlier: it modifies the object *in place* (by reference). We
do not have to assign the result into a new variable using a construct
like `df <- setDF(dt)` (but we still can write like this, handy when
using magrittr pipes). This is a manifestation of the power of
data.tables: the object is not copied but the same object is modified
in memory instead. `setDF` and `setDT` are very efficient, even huge
tables are converted instantly with virtually no need for any
additional memory.
However, big powers come hand\-in\-hand with big responsibility:
it is easy to forget that `setDF` modifies the function argument.
C.3 Indexing: The Major Powerhorse of Data Tables
-------------------------------------------------
Data tables’ indexing is much more powerful than that of data frames.
The single\-bracket indexing is a powerful (albeit confusing) set of
functions. It’s general syntax is as follows:
```
dt[i, j, by]
```
where `i` specifies what to do with rows (for instance, select certain
rows), `j` tells what to do with columns (such as select columns,
compute new columns, aggregate columns), and `by` contains the
grouping variables.
Let’s demonstrate this with the *flights* data from *nycflights13*
package. We load the data and transform it into data.table:
```
data(flights, package="nycflights13")
setDT(flights)
head(flights)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## 4: 2013 1 1 544 545 -1 1004
## 5: 2013 1 1 554 600 -6 812
## 6: 2013 1 1 554 558 -4 740
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## 4: 1022 -18 B6 725 N804JB JFK BQN 183
## 5: 837 -25 DL 461 N668DN LGA ATL 116
## 6: 728 12 UA 1696 N39463 EWR ORD 150
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
## 4: 1576 5 45 2013-01-01 05:00:00
## 5: 762 6 0 2013-01-01 06:00:00
## 6: 719 5 58 2013-01-01 05:00:00
```
### C.3\.1 i: Select Observations
Obviously, we can always just tell which observations we want:
```
flights[c(1:3),]
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
```
picks the first three lines from the data. Maybe more interestingly,
we can use the special variable `.N` (the number of rows), to get the
penultimate row:
```
flights[.N-1,]
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 9 30 NA 1159 NA NA
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 1344 NA MQ 3572 N511MQ LGA CLE NA
## distance hour minute time_hour
## 1: 419 11 59 2013-09-30 11:00:00
```
We can select observations with logical index vector in the same way as in data frames:
```
head(flights[origin == "EWR" & dest == "SEA",], 3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 724 725 -1 1020
## 2: 2013 1 1 857 851 6 1157
## 3: 2013 1 1 1418 1419 -1 1726
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 1030 -10 AS 11 N594AS EWR SEA 338
## 2: 1222 -25 UA 1670 N45440 EWR SEA 343
## 3: 1732 -6 UA 16 N37464 EWR SEA 348
## distance hour minute time_hour
## 1: 2402 7 25 2013-01-01 07:00:00
## 2: 2402 8 51 2013-01-01 08:00:00
## 3: 2402 14 19 2013-01-01 14:00:00
```
will create a new data table including only flights from Newark to
Seattle. However, note that we just use `origin`, and not
`flights$origin` as were the case with data frames. Data tables
evaluate the arguments as if inside `with`\-function.
The first, integer indexing corresponds to dplyr’s `slice` function
while the other one is equivalent to `filter`.
### C.3\.2 j: Work with Columns
`j` is perhaps the most powerful (and most confusing) of all arguments
for data table indexing. It allows both to select and do more complex
tasks. Lets start with selection:
```
head(flights[, dest], 3)
```
```
## [1] "IAH" "IAH" "MIA"
```
selects only the `dest` variable from the data. Note this results in
a vector, not in a single\-variable data table. If you want to get
that, you can do
```
head(flights[, .(dest)], 3)
```
```
## dest
## 1: IAH
## 2: IAH
## 3: MIA
```
`.()` is just an alias for `list()`, encoded differently in data
tables to improve readability and make it easier to type. If we want to select
more that one variable, we can use the latter syntax:
```
head(flights[, .(origin, dest)], 3)
```
```
## origin dest
## 1: EWR IAH
## 2: LGA IAH
## 3: JFK MIA
```
Selection supports a number of goodies, such as ranges of variables
with `:` (for instance, `dep_time:arr_delay`) and excluding variables
with `!` or `-` (for instance, `-year`).
Obviously we can combine both `i` and `j`: let’s select origin and
departure delay for flights to Seattle:
```
head(flights[dest == "SEA", .(origin, dep_delay)], 3)
```
```
## origin dep_delay
## 1: EWR -1
## 2: JFK 13
## 3: EWR 6
```
The example so far broadly corresponds to dplyr’s `select`.
But `j` is not just for selecting. It is also for computing. Let’s
find the mean arrival delay for flights to Seattle:
```
flights[dest == "SEA", mean(arr_delay, na.rm=TRUE)]
```
```
## [1] -1.099099
```
Several variables can be returned by wrapping, and optionally named,
these in `.()`. For instance, find the average departure and arrival
delay for all flights to Seattle, given the flight was delayed on
arrival, and name these `dep` and `arr`:
```
flights[dest == "SEA" & arr_delay > 0,
.(dep = mean(dep_delay, na.rm=TRUE), arr = mean(arr_delay, na.rm=TRUE))]
```
```
## dep arr
## 1: 33.98266 39.79984
```
The result is a data table with two variables.
We can use the special variable `.N` to count the rows:
```
flights[dest == "SEA" & arr_delay > 0, .N]
```
```
## [1] 1269
```
will tell us how many flights to Seattle were delayed at arrival.
Handling the case where the variable names are stored in other
variables is not that hard, but still adds a layer of complexity. We
can specify the variables in `.SDcols` parameter. This
parameter determines which columns go into `.SD` (\=Subset Data)
special variable. Afterwards we make an `lapply` expression in `j`:
```
flights[dest == "SEA" & arr_delay > 0,
lapply(.SD, function(x) mean(x, na.rm=TRUE)),
.SDcols = c("arr_delay", "dep_delay")]
```
```
## arr_delay dep_delay
## 1: 39.79984 33.98266
```
Let’s repeat: `.SDcols` determines which variables will go into the
special `.SD` list (default: all). `lapply` in `j` computes mean
values of each of the variables in the `.SD` list. This procedure
feels complex, although it is internally optimized.
These examples correspond to dplyr’s `aggregate` function. One can
argue, however, that data tables’ syntax is more confusing and harder
to read. Note also that the functionality data tables offer here is
optimized for speed and memory efficiency but still return a new
object. Aggregation does not work by reference.
### C.3\.3 Group in `by`
Finally, all of the above can by computed by groups using `by`. Let’s
compute the average delays above by carrier and origin:
```
flights[dest == "SEA" & arr_delay > 0,
.(dep = mean(dep_delay, na.rm=TRUE), arr = mean(arr_delay, na.rm=TRUE)),
by = .(carrier, origin)]
```
```
## carrier origin dep arr
## 1: DL JFK 29.82373 39.49831
## 2: B6 JFK 28.49767 40.32093
## 3: UA EWR 40.24053 41.85078
## 4: AS EWR 31.80952 34.36508
## 5: AA JFK 34.04132 40.48760
```
We just had to specify the `by` argument that lists the grouping
variables. If more than one, these should be wrapped in a list with
`.()` function.
We can use the `.N` variable to get the group size. How many flights
did each carrier from each origin?
```
flights[, .N, by=.(carrier, origin)] %>%
head(3)
```
```
## carrier origin N
## 1: UA EWR 46087
## 2: UA LGA 8044
## 3: AA JFK 13783
```
Finally, we can also use quoted variables for grouping too just be
replacing `.()` with `c()`:
```
flights[, .N, by=c("carrier", "origin")] %>%
head(3)
```
```
## carrier origin N
## 1: UA EWR 46087
## 2: UA LGA 8044
## 3: AA JFK 13783
```
In dplyr context, the examples here include `group_by` and `summarize` verbs.
Read more about the basic usage in data.table the vignette [Data analysis using data.table](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html).
C.4 `:=`–Create variables by reference
--------------------------------------
When summarizing, the values we compute in `j` always create a
new data.table; reducing operations cannot be done in place.
Computing new variables, however, can be done in place.
In-place variable computations (without summarizing) use the
`:=` assignment operator in `j`. Let’s compute a new
variable, speed, for each flight. We can do this as follows:
```
flights[, speed := distance/(air_time/60)]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750
```
We see the new variable, speed, included as the last variable in the
data. Note we did this operation *by reference*, i.e. we did not
assign the result to a new data table. The existing table was
modified in place.
The same assignment operator also permits us to remove variables by
setting these to `NULL`. Let’s remove speed:
```
flights[, speed := NULL]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour
## 1: 1400 5 15 2013-01-01 05:00:00
## 2: 1416 5 29 2013-01-01 05:00:00
## 3: 1089 5 40 2013-01-01 05:00:00
```
Indeed, there is no speed variable any more.
Assigning more than one variable by reference may feel somewhat more
intimidating:
```
flights[, c("speed", "meanDelay") := .(distance/(air_time/60), (arr_delay + dep_delay)/2)]
flights %>% head(3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed meanDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5
```
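The same multi-column assignment can also be written in the functional form of
`:=`, which some find easier to read; a sketch equivalent to the call above:
```
flights[, `:=`(speed = distance/(air_time/60),
               meanDelay = (arr_delay + dep_delay)/2)]
```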
Assignment works together with both selection and grouping. For
instance, we may want to replace negative delays with zeros:
```
flights[ arr_delay < 0, arr_delay := 0][, arr_delay] %>%
head(20)
```
```
## [1] 11 20 33 0 0 12 19 0 0 8 0 0 7 0 31 0 0 0 12 0
```
Indeed, we only see positive numbers and zeros. But be careful: we
have now overwritten `arr_delay` in the original data. We cannot
restore the previous state without re-loading the dataset.
As an example of
grouping, let’s compute the maximum departure delay by origin:
```
flights[, maxDelay := max(dep_delay, na.rm=TRUE), by=origin] %>%
head(4)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## 4: 2013 1 1 544 545 -1 1004
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## 4: 1022 0 B6 725 N804JB JFK BQN 183
## distance hour minute time_hour speed meanDelay maxDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5 1126
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0 911
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5 1301
## 4: 1576 5 45 2013-01-01 05:00:00 516.7213 -9.5 1301
```
We can see that `by` caused the maximum delay to be computed for each
group; however, the data is not summarized: the group maximum is simply
added to every single row.
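As a small illustration of what such a group-wise column can be used for, the
sketch below picks out the most delayed departure(s) from each origin:
```
# rows whose departure delay equals their origin's maximum
flights[dep_delay == maxDelay, .(origin, dest, dep_delay)]
```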
Finally, if you *do not* want to modify the original data, you should
use the `copy()` function. This makes a deep copy of the data, and you can
modify the copy afterwards:
```
fl <- copy(flights)
fl <- fl[, .(origin, dest)]
head(fl, 3)
```
```
## origin dest
## 1: EWR IAH
## 2: LGA IAH
## 3: JFK MIA
```
```
head(flights, 3)
```
```
## year month day dep_time sched_dep_time dep_delay arr_time
## 1: 2013 1 1 517 515 2 830
## 2: 2013 1 1 533 529 4 850
## 3: 2013 1 1 542 540 2 923
## sched_arr_time arr_delay carrier flight tailnum origin dest air_time
## 1: 819 11 UA 1545 N14228 EWR IAH 227
## 2: 830 20 UA 1714 N24211 LGA IAH 227
## 3: 850 33 AA 1141 N619AA JFK MIA 160
## distance hour minute time_hour speed meanDelay maxDelay
## 1: 1400 5 15 2013-01-01 05:00:00 370.0441 6.5 1126
## 2: 1416 5 29 2013-01-01 05:00:00 374.2731 12.0 911
## 3: 1089 5 40 2013-01-01 05:00:00 408.3750 17.5 1301
```
As you see, the `flights` data has not changed.
These operations correspond to dplyr’s `mutate` verb. However,
`mutate` always makes a copy of the original dataset, something that
may well make your analysis slow and sluggish with large data.
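For comparison, a rough dplyr equivalent of the speed computation (assuming
dplyr is loaded) returns a modified copy rather than changing `flights` in
place:
```
flights %>%
  mutate(speed = distance/(air_time/60))
```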
Read more in the vignette [Data.table reference semantics](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reference-semantics.html).
C.5 Keys
--------
Data tables allow fast lookup based on a *key*. In its simplest
form, a key is a column (or several columns) which is used to
pre-sort the data table. Pre-sorting makes it much faster to look up
certain values and to perform grouping operations and merges. As data can
only be sorted according to one rule at a time, there can only be one
key per data.table (but a key may be based on several variables).
Let’s set origin and destination as the key for the data table:
```
data(flights, package="nycflights13")
```
```
setDT(flights, key=c("origin", "dest"))
fl <- flights[,.(origin, dest, arr_delay)]
# focus on a few variables only
head(fl, 3)
```
```
## origin dest arr_delay
## 1: EWR ALB 0
## 2: EWR ALB 40
## 3: EWR ALB 44
```
We see that both origin and destination are alphabetically ordered.
Note that when selecting variables, the resulting data table `fl`
retains the key of the original one.
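You can verify this directly: `key()` returns the key columns of a data table
(or `NULL` if no key is set):
```
# which columns form the key?
key(fl)
```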
Once a key is set, we can easily subset by key by simply feeding the key values
into `i`:
```
fl["LGA"] %>%
head(5)
```
```
## origin dest arr_delay
## 1: LGA ATL 0
## 2: LGA ATL 12
## 3: LGA ATL 5
## 4: LGA ATL 0
## 5: LGA ATL 17
```
will extract all LaGuardia-originating flights. In terms of output,
this is equivalent to `fl[origin == "LGA"]`, just much more
efficient. When you want to
extract flights based on an origin-destination pair, you can just supply
values for both key columns:
```
fl[.("EWR", "SEA")] %>%
head(4)
```
```
## origin dest arr_delay
## 1: EWR SEA 0
## 2: EWR SEA 0
## 3: EWR SEA 0
## 4: EWR SEA 0
```
Again, this can be achieved in other ways, but keys are more
efficient. Finally, if we want to extract based on the second key column only,
the syntax is more confusing:
```
fl[.(unique(origin), "SEA")] %>%
head(4)
```
```
## origin dest arr_delay
## 1: EWR SEA 0
## 2: EWR SEA 0
## 3: EWR SEA 0
## 4: EWR SEA 0
```
We have to tell `[` that we want to extract all observations
where the first key column takes any value and the second one is “SEA”.
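If you do not want to set a key at all, data.table also supports ad-hoc
keyed-style subsetting through the `on` argument; a sketch that subsets on
`dest` only:
```
fl[.("SEA"), on = "dest"] %>%
  head(4)
```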
Read more in the vignette [Keys and fast binary search based subset](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-keys-fast-subset.html).
C.6 Resources
-------------
* [Data Table CRAN page](https://cran.r-project.org/web/packages/data.table/index.html).
Vignettes are a very valuable source of information.
D Using Remote Server
=====================
Sooner or later you will be in a situation where you have to work on a
**distant networked computer**. There are many reasons for this:
your laptop may be too weak for certain tasks, certain
data may not be allowed to leave the place where it is stored, or you may be
expected to use the same computer as your teammates. Or maybe you want to set up a website, and your laptop, obviously, travels around with you instead of staying in one place with a reliable internet connection all the time. The server you use may
be a standalone box located in a rack in your employer’s server room,
or it may be a virtual machine in a cloud like Amazon EC2. You may
also want to set up your own server, or your own virtual machine.
D.1 Server Setup
----------------
There are many ways one can set up a distant machine. It may be
Windows or Linux (or any of the other Unixes). It may or may not have a
graphical user interface (GUI) installed or otherwise accessible (many
Unix programs can display nice windows on your laptop while still
running on the server). It may or may not have RStudio
available over a web browser. Here we discuss a bare-bones option
with no access to a GUI and no web access to RStudio. We assume this server is already set up for you and do not discuss installation here.
This is a fairly common setup, for instance when dealing with
sensitive data, in organizations where computer skills and
sysadmin time are limited, or when you rent your own cheap but limited
server (a graphical user interface takes a lot of memory).
D.2 Connecting to the Remote Server
-----------------------------------
Given that the server is already running, your first task is to connect to it. Here this means
that you will enter commands on your laptop, but those commands are
actually run on the server.
The most common way to connect to a remote server is via *ssh*. ssh
stands for “secure shell” and means that all
communication between you and the remote computer is encrypted. You connect to the server
as
```
ssh myserver.somewhere.com
```
*ssh* is nowadays pretty much the industry standard for such connections; it comes pre-installed on Macs and is included with *git bash* too.
When you ssh to the remote server, it asks for your password and opens a remote shell
connection. If this is your first time connecting from this particular laptop, you may also be asked to accept the server’s fingerprint. This is an additional security measure to ensure that you are actually talking to the computer you think you are talking to.
The remote machine will offer you a bash shell environment similar to the one you are using
on your computer, but most likely you will see a different prompt, one that
contains the server’s name. You may also see some login
messages. Now all the commands you are issuing are
run on the remote machine. So `pwd` shows your working
directory on the server, which in general is not the same as on the
local machine, and `ls` shows the files on the server, not on your
laptop. Now you can use `mkdir` to create the project folder on the
server.
Note: when entering your password, it usually does not
print anything in response, not even asterisks. It feels as if your
keyboard is not working. But it is working, and when you finish and press enter, you will be logged in.
By default, ssh attempts to log in with your local username. If your
username on the server differs from that on your laptop, you need to add it to the ssh command:
```
ssh username@myserver.somewhere.com
```
Local and remote shell window
The screenshot above shows two command line windows, the upper one connecting remotely to *info201*, and the lower one running locally on a computer called *is-otoometd5060*. In the upper one, you can see the login command `ssh otoomet@info201.ischool.uw.edu` and various start-up messages. The `pwd` command shows the current working directory being */home/otoomet*, and `ls` shows there are four objects there. Below, we are on the local computer *is-otoometd5060*. The current working directory has the same name, but on the local computer it contains rather more entries.
Finally, when done, you want to get out. The polite way to close the
connection is with the
command
```
exit
```
which waits until all open connections are safely closed. But usually you
can just close the terminal as well.
D.3 Copying Files
-----------------
Before you can run your R scripts, or build a website on the server, you have to get your code and data copied over. There are several possibilities.
### D.3\.1 scp
The most straightforward approach is `scp`, **s**ecure **c**o**p**y. It comes pre-installed on Macs and with git bash, and it works in a similar fashion to `cp` for local files, except that `scp` can copy
files between your machine and a remote computer. Under the hood it uses an ssh
connection, just like the `ssh` command itself, so the bad guys out there cannot easily see what you are doing. Its syntax is rather
similar to that of `cp`:
```
scp user1@host1:file1 user2@host2:file2
```
This copies “file1” from the server “host1” under username “user1” to
the other server. Passwords are asked for as needed. The “host” part
of the file must be understood as the full hostname including dots,
such as “hyak.washington.edu”. “file” is the full path to the file,
relative to the home directory, such as `Desktop/info201/myscript.R`.
When accessing local files, you may omit the “[user@host](mailto:user@host):” part. So,
for instance, in order to copy your `myscript.R` from folder
`info201` on your laptop’s Desktop to the folder `scripts` in
your home folder on the server, you may issue
```
scp Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
```
(here we assume that the working directory of your laptop is the one above
`Desktop`.)
Note that exactly as with `cp`, you may omit the destination file name
if the destination is a directory: it simply copies the file into that
directory while preserving its name.
`scp` in action. The upper shell window, running locally, depicts *scp* copying the file *startServer.R* from the directory *api* to the *api* directory on the remote server (while retaining the same name). The lower window shows the remote machine: first, the `ls` command shows we have an *api* folder in our home directory, and second, `ls -l api` shows the content of the *api* directory in long form. *startServer.R* has been copied over there.
After running your script, you may want to copy your results back to
your laptop. For instance, if you need to get the file
`figure.png` out of the server, you can do
```
scp myusername@server.ischool.edu:scripts/figure.png Desktop/info201/
```
As above, this copies a file from the given directory, and drops it
into the `info201` folder on your Desktop.
Always issue the `scp` command locally on your laptop. This is because your laptop can access the server but usually not the other way around. In order to be reachable via *ssh* (and *scp*), a computer must have a public IP address and an ssh server up and running. It is unlikely you have configured your laptop in this way.
### D.3\.2 rsync
`rsync` is a more advanced alternative to `scp`. It works in many ways
like `scp`, but it is smart enough to understand which files
have been updated, and to copy only the updated parts of the files. It is the
recommended way of working with small updates to large files.
Its syntax is rather similar to that of `scp`. To copy `file` to the
remote server as `file2` (in the home directory), we do
```
rsync file user2@host2:file2
```
and in order to copy `file1` from the server to a local `file` (in the
current working directory):
```
rsync user1@host1:file1 file
```
I also recommend
exploring some of its many options; for instance, `-v` (verbose) reports
what it is doing.
The example above with your code and figure might now look like this:
```
rsync -v Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
rsync -v myusername@server.ischool.edu:scripts/figure.pdf Desktop/info201/
```
Maybe the easiest way to copy your files is to copy (or rather update) whole
directories. For instance, instead of the code above, you can do
```
# copy all files to server:
rsync -v Desktop/info201/* myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
# ... and copy the results back:
rsync -v myusername@server.ischool.edu:scripts/* Desktop/info201/
```
Here `*` means *all files in this directory*. Hence, instead of
copying the files individually between the computers, we just copy
all
of them. Even better, we actually do not copy but just update. Huge
files that do not change do not take any bandwidth.
### D.3\.3 Graphical Frontends
Instead of relying on command-line tools, one can also use graphical
front-ends. For instance, “WinSCP” is a nice Norton Commander-style
frontend for Windows for copying files between the local and a remote machine over scp.
It provides a split window representing files on the
local and the remote end, and one can move, copy-and-paste and interact
with the mouse on these panes. On Mac you may take a look at
“Cyberduck”.
### D.3\.4 Remote Editing
Besides copying your files, many text editors also offer a “remote
editing” option. From the user’s perspective this looks as if you were working directly
on the remote server’s hard disk. Under the hood, the files
are copied back and forth with scp, rsync or one of their friends.
Emacs and vi do it out of the box; VSCode, Atom and Sublime require a
plugin. As far as I know, it is not possible with RStudio.
It is also possible to mount (attach) the harddisk of the remote
server to your laptop as if it were a local disk. Look yourself for
more information if you are interested.
D.4 R and Rscript
-----------------
When your code has been transferred to the server, your next task is to
run it. But before you can do that, you may want to install the
packages you need, for instance *ggplot2* and *dplyr*. This must be done from the R console using
`install.packages()`. You start R interactively with the command
```
R
```
It opens an R session, not unlike what you see inside RStudio, except that
here you have no RStudio to hand-hold you through the session. Now all loading,
saving, inspecting of files, etc. must be done through R commands.
The first time you do it, R complains about the
non-writeable system-wide library and proposes to create
your personal library. You should answer “yes” to these prompts. As
Linux systems typically compile packages during installation, installation is slow and you see many messages (including warnings) in the
process. But it works, provided that the necessary system libraries are available. You may also open another terminal and ssh to the server from there while the packages are compiling in the other window.
Now you can finally run your R code. I strongly recommend changing into
the directory where you intend to run the project before starting
R (`cd scripts` if you follow the example directory setup above). There are two options: either start R
interactively, or run it as a script.
If you do it from an interactive R session, you have to *source* your script:
```
source("myscript.R")
```
The script will run, and the first attempt most likely ends with an error message. You have
to correct the error either on your laptop and copy the file over to
the server again, or directly on the server, and then
re-run the script. Note that you don’t have to exit from the R session when
copying files between your laptop and the server. Edit the file, copy it over
from your laptop (using `scp` or
other tools), and just re-source it from
within the R session. If you need an open R session on the server, you may want to have several terminals connected to the server at the same time: in one, you have the R session, in another you may want to copy/move/edit files, and it may also be handy to have a window with `htop` to see how your running code is doing (see below).
Three terminals connecting to a remote server at the same time. The top one has been used for file management, the middle one shows the active processes of user *otoomet*, and the bottom one has an open R session for package installations. Multiple open connections are often a convenient way to switch frequently between different tasks.
Opening a separate R session may be useful for installing packages.
For running your scripts, however, I recommend running them entirely from the
command line, either as
```
R CMD BATCH myscript.R
```
or
```
Rscript myscript.R
```
The first version produces somewhat more informative error messages;
the second handles the environment in a somewhat more consistent and
efficient manner.
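If you want your script to take arguments from the command line, for example a
data file name, a minimal sketch might look like this (the file and argument
names are just for illustration):
```
# myscript.R -- read an optional file name from the command line
args <- commandArgs(trailingOnly = TRUE)
infile <- if (length(args) >= 1) args[1] else "data.csv"
cat("Reading", infile, "\n")
```
You would then run it as `Rscript myscript.R mydata.csv`.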
### D.4\.1 Graphics Output with No GUI
If the server does not have any graphics capabilities, you have to
save your figures as files. For instance, to save the image in a pdf
file, you may use the following code in your R program:
```
pdf(file="figure1.pdf", width=12, height=8)
# width and height in inches
# check also out jpeg() and png() devices.
# do your plotting here
plot(1:10, rnorm(10))
# done plotting
dev.off()
# saves the image to disk and closes the file.
```
Afterwards you will have to copy the image file *figure1.pdf* to your laptop for future use. Note that the file will be saved in the current working directory of the R session (unless you specify another folder). This is normally the folder where you execute the `Rscript` command.
Besides pdf graphics, R can also output jpg, png, svg and other formats. Check out the corresponding devices `jpeg`, `png`, `svg` and so forth. Additionally, *ggplot* has its own dedicated way of saving plots using `ggsave`, although the base R graphics devices, such as `pdf`, will work too.
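A small sketch of the *ggplot* route, assuming the *ggplot2* package is
installed on the server:
```
library(ggplot2)
# build a simple plot object
p <- ggplot(data.frame(x = 1:10, y = rnorm(10)), aes(x, y)) +
  geom_point()
# write it to a file in the current working directory (size in inches)
ggsave("figure1.png", plot = p, width = 12, height = 8)
```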
D.5 Life on Server
------------------
Servers operate in many ways like the command line
on your own computer. However, there are a number of differences.
### D.5\.1 Be Social!
While your laptop is yours, and you are free to exploit all its
resources for your own good, this is not true for the server. The server is a
multiuser system, potentially doing good work for many people at the
same time. So
the first rule is: **Don’t take more resources than you need!**
That means don’t let the system run, grab memory, or occupy disk space
just for fun. Try to keep your R workspace clean (check out the `rm()`
function) and
close R as soon as it has finished (this happens automatically if you
run your script through `Rscript` from the command line). Don’t copy the dataset without a
good reason, and keep your copies in a compressed form. R can open
gzip and bzip2 files on the fly, so usually you don’t even need to
decompress these. Avoid costly recalculations of something you have
already calculated. All this is even more important in the last days before the deadline,
when many people are using the server.
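To illustrate the point about compressed files: base R’s `read.csv()` reads
gzip- and bzip2-compressed files directly (the file name here is hypothetical):
```
# no need to decompress on disk; R handles .gz and .bz2 transparently
dat <- read.csv("data.csv.bz2")
```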
Servers are typically well configured to tame misbehaving programs.
You may sometimes see your script stopping with a message “killed”.
This most likely means that it occupied too much memory, and the system
just killed it. Deal with this.
### D.5\.2 Useful Things to Do
There are several useful commands you can experiment with while on the
server.
```
htop
```
(press `q` to quit) tells you which programs are running on the server, how much memory and CPU
they take, and who their owners (the corresponding users) are. It
also permits you to kill your misbehaving processes (press `k` and
select `SIGKILL`). Read more with `man htop`.
```
w
```
(**w**ho) prints the currently logged-in users of the server.
```
df -h
```
(**d**isk **f**ree in **h**uman-readable units) shows the free and
occupied disk space. You are mainly affected by what is going on in the file system
`/home`.
### D.5\.3 Permissions and ownership
Unix systems are very strict about ownership and permissions. You are
a normal user with limited privileges. In particular, you cannot
modify or delete files that you don’t own. In a similar fashion, you
cannot kill processes you did not start. Feel free to attempt. It
won’t work.
In case you need to do something with elevated privileges (as
“superuser”), you have to contact the system administrator. In practice,
their responsiveness and willingness to accommodate your requests will
vary.
### D.5\.4 More than One Connection
It is perfectly possible to log onto the server through multiple terminals at the
same time. You just open several terminals and log onto the
server from each of these. You can use one terminal to observe how your script is
doing (with `htop`), another one to run the script, and a third one to inspect
output. If you find such an approach useful, I recommend
familiarizing yourself with GNU screen (the `screen` command), which includes
many related goodies.
D.6 Advanced Usage
------------------
### D.6\.1 ssh keys, .ssh/config
Without further configuration, every time you open an ssh connection
you have to type your password. Instead of re-entering it
over and over again (this may not be particularly secure and it is definitely not convenient), you can configure
an ssh key and copy it to the server. Next time, you will be
automatically authenticated with the key and you won’t have to type
the password any more. Note: this is the same ssh key that GitHub uses if
you connect to GitHub over ssh.
As the first step, you have to create the key
with `ssh-keygen` (you may choose an empty passphrase), unless you
have already created one. Thereafter copy
your public key to the server with `ssh-copy-id`. Next time you log
onto the server, no password is needed. A good source for help with creating
and managing ssh keys is
[GitHub help](https://help.github.com/articles/connecting-to-github-with-ssh/).
You can also configure your ssh to recognize abbreviated
server names and your corresponding user names. This allows you to
connect to the server with a simple command like `ssh info201`. This
information is stored in the file
`~/.ssh/config`, and should contain lines like
```
Host info201
User <your username>
Hostname info201.ischool.uw.edu
```
The `Host` keyword is followed by the abbreviated name of the server;
the following lines contain your username and the publicly visible
hostname of the server. Seek out more information if you are interested.
### D.6\.2 More about command line: pipes and shell patterns
*bash* is a powerful programming language. It is not particularly well suited to performing calculations or producing graphs, but it is excellent at gluing together other programs and their output.
One very powerful construct is the *pipe*. Pipes are in many ways similar to *magrittr* pipes in R, or perhaps we should say it the other way around, as shell pipes were introduced in 1973, a quarter of a century before R was created. Pipes connect the output of one command to the input of another command. For instance, let’s take the commands `ls -l` and `head`. The former lists the files (in long form) and the latter prints out the first few lines of a text file. But `head` is not just for printing files; it can print the first few lines of whatever you feed it. Look, for instance, at the following command (actually a compound command):
```
ls -l | head
```
`ls -l` creates the file listing (in long form). But instead of printing it on screen, it now sends it over the pipe `|` to the `head` utility. That one extracts the first lines and prints only those.
Example of the `ls -l` command that prints a number of files (above). Below, the same command is piped through `head -3`, which retains only the first three lines (and prints those). Note that the first line is not a file but the total size of the files in this directory (in kilobytes).
Pipes are not limited to two commands only. You can pipe as many commands together as you wish. For instance, you may want to see the first few lines in a large compressed csv file that contain the word *Zilong*. We use the following commands:
* **bzcat** prints bzip-compressed data (you normally invoke it like `bzcat file.txt`). But it just prints and does not do anything else with the output.
* **grep** searches for a pattern in text. This can be used as `grep pattern file`, for instance `grep salary business-report.txt`. Note that *pattern* is a regular expression (rather similar to those in R’s `gsub` and `grep` functions), so `grep` can search for a wide range of patterns. However, it cannot open compressed files, and neither can it limit the output to just a few lines.
* **head** prints the first few lines of text. You can print out the first *n* lines of a file as `head -n file.txt`, but again, this does not work with compressed files.
We pipe the commands together as
```
bzcat data.csv.bz2 | grep Zilong | head
```
and achieve the result we want. So pipes are an excellent way to join small commands, each of which is good at only a single task, into complex compound tasks.
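The same compound command can also be fed straight into R through a pipe
connection; a small sketch, using the (hypothetical) file and pattern from the
example above:
```
# grep strips the header line, hence header = FALSE
zilong <- read.csv(pipe("bzcat data.csv.bz2 | grep Zilong | head"),
                   header = FALSE)
```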
Another handy (albeit much less powerful) tool in the shell is shell patterns. These are a little bit like regular expressions for file names, just much simpler. There are two special characters in file names:
* **\*** means any number of any characters. For instance, `a.*` means all files like `a.`, `a.c`, `a.txt`, `a.txt.old`, `a...` and so on. It is just any number of any characters, including none at all, and “any” also includes dots. However, the pattern does not cover `ba.c`.
* **?** means a single character, so `a.?` can stand for `a.c` and `a.R` but not for `a.txt`.
Shell patterns are useful for file manipulations where you have to quickly sort through some sort of file name pattern. They are handled by the shell and not by individual commands, so they may not work if you are not at the shell prompt but running another program, such as R or a text editor.
For instance, let’s list all *jpg* files in the current directory:
```
ls *.jpg
```
This lists all files matching the pattern `*.jpg`, i.e. everything that has *.jpg* at its end.
Now let us copy all *png* files from the server to the current directory:
```
scp user@server.com:*.png .
```
This copies all files of the form `*.png` from the server to the current directory, i.e. all files that end with *.png*.
### D.6\.3 Running RScript in ssh Session
A passwordless ssh connection gives you wonderful new possibilities.
First, you don’t even have to log into the server explicitly. You can
run a one-command ssh session on the server directly from your
laptop. Namely, ssh accepts commands to be run on the remote
machine. If invoked as
```
ssh myusername@server.ischool.edu "Rscript myscript.R"
```
it does not open a remote shell but runs `Rscript myscript.R` on the server instead.
Your command sequence for the whole process will accordingly look something like this:
```
rsync -v Desktop/info201/* myusername@server.ischool.edu:scripts/
ssh myusername@server.ischool.edu "Rscript scripts/myscript.R"
rsync -v myusername@server.ischool.edu:scripts/* Desktop/info201/
```
All these commands are issued on your laptop. You can also save them
to a text file and run all three together as a single **shell
script**!
Further, you don’t even need the shell. Instead, you can tell R on
your laptop how to start R on the
remote server over ssh. In this way you can turn your laptop and
server combination
into a high-performance-computing cluster! This allows you
to copy the script and run it on the server directly from within the
R program that runs
on your laptop. Cluster computing is out of the scope of this chapter, but if you
are interested, look up the **makePSOCKcluster()** function in the **parallel**
package.
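A minimal sketch of this idea, assuming passwordless ssh access to the
(hypothetical) server and R installed there:
```
library(parallel)
# start one R worker on the remote machine over ssh
cl <- makePSOCKcluster("myusername@server.ischool.edu")
# run a toy computation on the server
parSapply(cl, 1:4, function(i) i^2)
stopCluster(cl)
```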
D.1 Server Setup
----------------
There are many ways one can set up a distant machine. It may be
Windows or linux (or any of the other unixes). It may or may not have
graphical user interface (GUI) installed or otherwise accessible (many
unix programs can display nice windows on your laptop while still
running on the server). It may or may not have RStudio
available over web browser. Here we discuss a barebone option
with no access to GUI and no web access to RStudio. We assume this server is already set up for you and do not discuss installation here.
This is a fairly common setup, for instance when dealing with
sensitive data, in organizations where computer skills and
sysadmin’s time is limited, or when you rent your own cheap but limited
server (graphical user interface takes a lot of memory).
D.2 Connecting to the Remote Server
-----------------------------------
Given the server is already running, your first task is to connect to it. Here it means
that you will enter commands on your laptop, but those command are
actually run on the server.
The most common way to connect to remote server is via *ssh*. ssh
stands for “secure shell” and means that all
communication between you and the remote computer is encrypted. You connect to the server
as
```
ssh myserver.somewhere.com
```
*ssh* is nowadays pretty much the industry standard for such connections, it comes pre\-installed on macs and it is included with *gitbash* too.
When you ssh to the remote server, it asks for your password and opens remote shell
connection. If this is your first time to connect from this particular laptop, you may also be asked to accept it’s fingerprint. This is an additional security measure to ensure that you are actually talking to the computer you think you are talking to.
The remote machine will offer you a similar bash shell environment as you are using
on your computer but most likely you see a different prompt, one that
contains the server’s name. You may also see some login
messages. Now all the commands you are issuing are
running on the remote machine. So `pwd` shows your working
directory on the server, which in general is not the same as on the
local machine, and `ls` shows the files on the server, not on your
laptop. Now you can use `mkdir` to create the project folder on the
server.
Note: when entering your password, it usually does not
print anything in response, not even asterisks. It feels as if your
keyboard is not working. But it is working, and when you finish and press enter, you will be logged in.
By default, ssh attempts to login with your local username. If your
username on the server differs from that on your laptop, you want to add it to the ssh command:
```
ssh username@myserver.somewhere.com
```
Local and remote shell window
The screenshot above shows two command line windows, the upper one connecting remotely on *info201*, and the lower one running locally at a computer called *is\-otoometd5060*. In the upper one, you can see the login command `ssh otoomet@info201.ischool.uw.edu` and various start\-up messages. The `pwd` command shows the current working directory being */home/otoomet*, and `ls` shows there are for objects there. Below, we are on the local computer *is\-otoometd5060*. Current working directory has the same name, but on the local computer it contains rather more entries.
Finally, when done, you want to get out. The polite way to close the
connection is with
command
```
exit
```
that waits until all open connections are safely closed. But usually you
can as well just close the terminal.
D.3 Copying Files
-----------------
Before you can run your R scripts, or build a website on the server, you have to get your code and data copied over. There are several possibilities.
### D.3\.1 scp
The most straightforward approach is `scp`, **s**ecure **c**o**p**y. It comes pre\-installed on mac and gitbash and it works in a similar fashion as `cp` for the local files, just `scp` can copy
files between your machine and a remote computer. Under the hood it uses ssh
connection, just like `ssh` command itself, so the bad guys out there cannot easily see what you are doing. It syntax is rather
similar to that of `cp`:
```
scp user1@host1:file1 user2@host2:file2
```
This copies “file1” from the server “host1” under username “user1” to
the other server. Passwords are asked for as needed. The “host” part
of the file must be understood as the full hostname including dots,
such as “hyak.washington.edu”. “file” is the full path to file,
relative to home directory, such as `Desktop/info201/myscript.R`.
When accessing local files, you may omit the “[user@host](mailto:user@host):” part. So,
for instance, in order to copy your `myscript.R` from folder
`info201` on your laptop’s Desktop to the folder `scripts` in
your home folder on the server, you may issue
```
scp Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
```
(here we assume that the working directory of your laptop is the one above
`Desktop`.)
Note that exactly as with `cp`, you may omit the destination file name
if the destination is a directory: it simply copies the file into that
directory while preserving its name.
`scp` in action. The upper shell window, running locally, depicts *scp* in action, copying file *startServer.R* from directory *api* to the remote server into *api* directory (while retaining the same name). The lower window shows the remote machine: first, `ls` command shows we have an *api* folder in our home directory, and second `ls -l api` shows the content of the *api* directory in long form. *startServer.R* is copied over there.
After running your script, you may want to copy your results back to
your laptop. For instance, if you need to get the file
`figure.png` out of the server, you can do
```
scp myusername@server.ischool.edu:scripts/figure.png Desktop/info201/
```
As above, this copies a file from the given directory, and drops it
into the `info201` folder on your Desktop.
Always issue `scp` command locally on your laptop. This is because your laptop can access the server but usually not the way around. In order to be connected via *ssh* (and *scp*), a computer must have public ip\-address, and ssh server up and running. It is unlikely you have configured your laptop in this way.
### D.3\.2 rsync
`rsync` is a more advanced approach to `scp`. It works in many ways
like `scp`, just it is smart enough to understand which files
are updated, and copy the updated parts of the files only. It is the
recommended way for working with small updates in large files.
Its syntax is rather similar to that of `scp`. To copy `file` to the
remote server as `file2` (in the home directory), we do
```
rsync file user2@host2:file2
```
and in order to copy a `file1` from server as local `file` (in the
current working directory):
```
rsync file user1@host1:file1 file
```
I also recommend to
explore some of its many options, for instance `-v` (verbose) reports
what it’s doing.
The example above with your code and figure might now look like that:
```
rsync -v Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
rsync -v myusername@server.ischool.edu:scripts/figure.pdf Desktop/info201/
```
Maybe the easiest way to copy your files is to copy (or rather update) the whole
directories. For instance, instead of the code above, you can do
```
# copy all files to server:
rsync -v Desktop/info201/* myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
# ... and copy the results back:
rsync -v myusername@server.ischool.edu:scripts/* Desktop/info201/
```
Here `*` means *all files in this directory*. Hence, instead of
copying the files individually between the computers, we just copy
all
of them. Even better, we actually do not copy but just update. Huge
files that do not change do not take any bandwidth.
### D.3\.3 Graphical Frontends
Instead on relying on command line tools, one can also use graphical
front\-ends. For instance, “WinSCP” is a nice Norton Commander\-Style
frontend for copying files between the local and a remote machine over scp
for Windows. It provides a split window representing files on the
local and the remote end, and one can move, copy\-and\-paste and interact
with the mouse on these panes. On Mac you may take a look at
“Cyberduck”.
### D.3\.4 Remote Editing
Besides copying your files, many text editors also offer a “remote
editing” option. From the user perspective this looks as if directly
working on the remote server’s hard disk. Under the hood, the files
are copied back and forth with scp, rsync or one of their friends.
Emacs and vi do it out\-of\-the box, VSCode, Atom and sublime require a
plugin. AFAIK it is not possible with RStudio.
It is also possible to mount (attach) the harddisk of the remote
server to your laptop as if it were a local disk. Look yourself for
more information if you are interested.
### D.3\.1 scp
The most straightforward approach is `scp`, **s**ecure **c**o**p**y. It comes pre\-installed on mac and gitbash and it works in a similar fashion as `cp` for the local files, just `scp` can copy
files between your machine and a remote computer. Under the hood it uses ssh
connection, just like `ssh` command itself, so the bad guys out there cannot easily see what you are doing. It syntax is rather
similar to that of `cp`:
```
scp user1@host1:file1 user2@host2:file2
```
This copies “file1” from the server “host1” under username “user1” to
the other server. Passwords are asked for as needed. The “host” part
of the file must be understood as the full hostname including dots,
such as “hyak.washington.edu”. “file” is the full path to file,
relative to home directory, such as `Desktop/info201/myscript.R`.
When accessing local files, you may omit the “[user@host](mailto:user@host):” part. So,
for instance, in order to copy your `myscript.R` from folder
`info201` on your laptop’s Desktop to the folder `scripts` in
your home folder on the server, you may issue
```
scp Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
```
(here we assume that the working directory of your laptop is the one above
`Desktop`.)
Note that exactly as with `cp`, you may omit the destination file name
if the destination is a directory: it simply copies the file into that
directory while preserving its name.
`scp` in action. The upper shell window, running locally, depicts *scp* in action, copying file *startServer.R* from directory *api* to the remote server into *api* directory (while retaining the same name). The lower window shows the remote machine: first, `ls` command shows we have an *api* folder in our home directory, and second `ls -l api` shows the content of the *api* directory in long form. *startServer.R* is copied over there.
After running your script, you may want to copy your results back to
your laptop. For instance, if you need to get the file
`figure.png` out of the server, you can do
```
scp myusername@server.ischool.edu:scripts/figure.png Desktop/info201/
```
As above, this copies a file from the given directory, and drops it
into the `info201` folder on your Desktop.
Always issue `scp` command locally on your laptop. This is because your laptop can access the server but usually not the way around. In order to be connected via *ssh* (and *scp*), a computer must have public ip\-address, and ssh server up and running. It is unlikely you have configured your laptop in this way.
### D.3\.2 rsync
`rsync` is a more advanced approach to `scp`. It works in many ways
like `scp`, just it is smart enough to understand which files
are updated, and copy the updated parts of the files only. It is the
recommended way for working with small updates in large files.
Its syntax is rather similar to that of `scp`. To copy `file` to the
remote server as `file2` (in the home directory), we do
```
rsync file user2@host2:file2
```
and in order to copy a `file1` from server as local `file` (in the
current working directory):
```
rsync file user1@host1:file1 file
```
I also recommend to
explore some of its many options, for instance `-v` (verbose) reports
what it’s doing.
The example above with your code and figure might now look like that:
```
rsync -v Desktop/info201/myscript.R myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
rsync -v myusername@server.ischool.edu:scripts/figure.pdf Desktop/info201/
```
Maybe the easiest way to copy your files is to copy (or rather update) the whole
directories. For instance, instead of the code above, you can do
```
# copy all files to server:
rsync -v Desktop/info201/* myusername@server.ischool.edu:scripts/
# now run the script on the remote machine
# ... and copy the results back:
rsync -v myusername@server.ischool.edu:scripts/* Desktop/info201/
```
Here `*` means *all files in this directory*. Hence, instead of
copying the files individually between the computers, we just copy
all
of them. Even better, we actually do not copy but just update. Huge
files that do not change do not take any bandwidth.
### D.3\.3 Graphical Frontends
Instead on relying on command line tools, one can also use graphical
front\-ends. For instance, “WinSCP” is a nice Norton Commander\-Style
frontend for copying files between the local and a remote machine over scp
for Windows. It provides a split window representing files on the
local and the remote end, and one can move, copy\-and\-paste and interact
with the mouse on these panes. On Mac you may take a look at
“Cyberduck”.
### D.3\.4 Remote Editing
Besides copying your files, many text editors also offer a “remote
editing” option. From the user perspective this looks as if directly
working on the remote server’s hard disk. Under the hood, the files
are copied back and forth with scp, rsync or one of their friends.
Emacs and vi do it out\-of\-the box, VSCode, Atom and sublime require a
plugin. AFAIK it is not possible with RStudio.
It is also possible to mount (attach) the harddisk of the remote
server to your laptop as if it were a local disk. Look yourself for
more information if you are interested.
D.4 R and Rscript
-----------------
When your code has been transferred to the server, your next task is to
run it. But before you can do it, you may want to install the
packages you need. For instance, you may want to install the *ggplot2* and *dplyr*. This must be done from R console using
`install.packages()`. You start R interactively by the command
```
R
```
It opens an R session, not unlike what you see inside of RStudio, just
here you have no RStudio to handrail you through the session. Now all loading,
saving, inspecting files, etc must be done through R commands.
The first time you do it, R complains about
non\-writeable system\-wide library and proposes to install and create
your personal libary. You should answer “yes” to these prompts. As
Linux systems typically compile the packages during installations, installation is slow and you see many messages (including warnings) in the
process. But it works, given that the necessary system libraries are available. You may alo open another terminal and ssh to the server from there while the packages are compiling in the other window.
Now you can finally run your R code. I strongly recommend to do it
from the directory where you intend to run the project before starting
R (`cd scripts` if you follow the example directory setup above). There are two options: either start R
interactively, or run it as a script.
If you do it from an interactive R session, you have to *source* your script:
```
source("myscript.R")
```
The script will run, and the first attempt most likely ends with an error message. You have
to correct the error either on your laptop and copy the file over to
the server again, or directly on the server, and
re\-run it again. Note that you don’t have to exit from the R session when
copying the files between your laptop and the server. Edit it, copy it over
from your laptop (using `scp` or
other tools), and just re\-source the file from
within the R session. If you need an open R session on the server, you may want to have several terminals connected to the server at the same time: in one, you have the R session, in another you may want to copy/move/edit files, and it may also be handy to have a window with `htop` too see how your running code is doing (see below).
Three terminals connecting to a remote server at the same time. The top one has been used for file management, the middle one shows tha active processes by user *otoomet*, and the bottom one has open R session for package installations. Multiple open connections is often a convenient way to switch frequently between different tasks.
Opening a separate R session may be useful for installing packages.
For running your scripts, I recommend you to run it entirely from
command line, either as
```
R CMD BATCH myscript.R
```
or
```
Rscript myscript.R
```
The first version produces a little more informative error messages,
the other one handles the environment in a little more consistent and
efficient manner.
### D.4\.1 Graphics Output with No GUI
If the server does not have any graphics capabilities, you have to
save your figures as files. For instance, to save the image in a pdf
file, you may use the following code in your R program:
```
pdf(file="figure1.pdf", width=12, height=8)
# width and height in inches
# check also out jpeg() and png() devices.
# do your plotting here
plot(1:10, rnorm(10))
# done plotting
dev.off()
# saves the image to disk and closes the file.
```
Afterwards you will have to copy the image file *figure1\.pdf* to your laptop for future use. Note that the file will be saved in the current working directory (unless you specify another folder) for the R session. This is normally the folder where you execute the `Rscript` command.
Besides of pdf graphics, R can also output jpg, png, svg and other formats. Check out the corresponding devices `jpeg`, `png`, `svg` and so forth. Additionally, *ggplot* has it’s own dedicated way of saving plots using `ggsave` although the base R graphics devices, such as `pdf` will work too.
### D.4\.1 Graphics Output with No GUI
If the server does not have any graphics capabilities, you have to
save your figures as files. For instance, to save the image in a pdf
file, you may use the following code in your R program:
```
pdf(file="figure1.pdf", width=12, height=8)
# width and height in inches
# check also out jpeg() and png() devices.
# do your plotting here
plot(1:10, rnorm(10))
# done plotting
dev.off()
# saves the image to disk and closes the file.
```
Afterwards you will have to copy the image file *figure1\.pdf* to your laptop for future use. Note that the file will be saved in the current working directory (unless you specify another folder) for the R session. This is normally the folder where you execute the `Rscript` command.
Besides of pdf graphics, R can also output jpg, png, svg and other formats. Check out the corresponding devices `jpeg`, `png`, `svg` and so forth. Additionally, *ggplot* has it’s own dedicated way of saving plots using `ggsave` although the base R graphics devices, such as `pdf` will work too.
D.5 Life on Server
------------------
The servers operate the same in many ways as the command line
on your own computer. However, there are a number of differences.
### D.5\.1 Be Social!
While you laptop is yours, and you are free to exploit all its
resources for your own good, this is not true for the server. The server is a
multiuser system, potentially doing good work for many people at the
same time. So
the first rule is: **Don’t take more resources than what you need!**
This that means don’t let the system run, grab memory, or occupy disk space
just for fun. Try to keep your R workspace clean (check out `rm()`
function) and
close R as soon as it has finished (this happens automatically if you
run your script through `Rscript` from command line). Don’t copy the dataset without a
good reason, and keep your copies in a compressed form. R can open
gzip and bzip2 files on the fly, so usually you don’t even need to
decompress these. Avoid costly recalculations of something you
already calculated. All this is even more important the last days before the deadline
when many people are running using the server.
Servers are typically well configured to tame misbehaving programs.
You may sometimes see your script stopping with a message “killed”.
This most likely means that it occupied too much memory, and the system
just killed it. Deal with this.
### D.5\.2 Useful Things to Do
There are several useful commands you can experiment with while on the
server.
```
htop
```
(press `q` to quit) tells you which programs run on the server, how much memory and cpu do
these take, and who are their owners (the corresponding users). It
also permits you to kill your misbehaving processes (press `k` and
select `SIGKILL`). Read more with `man htop`.
```
w
```
(**w**ho) prints the current logged\-in users of the server.
```
df -h
```
(**d**isplay **f**ree in **h**uman\-readable units) shows the free and
occupied disk space. You are mainly influenced by what is going on in the file system
`/home`.
### D.5\.3 Permissions and ownership
Unix systems are very strict about ownership and permissions. You are
a normal user with limited privileges. In particular, you cannot
modify or delete files that you don’t own. In a similar fashion, you
cannot kill processes you did not start. Feel free to attempt. It
won’t work.
In case you need to do something with elevated privileges (as
“superuser”), you have to contact the system administrator. In practice,
their responsiveness and willingness to accommodate your requests will
vary.
### D.5\.4 More than One Connection
It is perfectly possible to log onto the server through multiple terminals at the
same time: you just open several terminals and log onto the
server from each of them. You can use one terminal to observe how your script is
doing (with `htop`), another to run the script, and a third to inspect the
output. If you find this approach useful, I recommend familiarizing
yourself with GNU screen (the `screen` command), which includes
many related goodies.
D.6 Advanced Usage
------------------
### D.6\.1 ssh keys, .ssh/config
Without further configuration, every time you open an ssh connection
you have to type your password. Instead of re\-entering it
over and over again (which is neither particularly secure nor convenient), you can create
an ssh key and copy it to the server. From then on you will be
automatically authenticated with the key and you won’t have to type
the password any more. Note: this is the same kind of ssh key that GitHub uses if
you connect to GitHub over ssh.
As the first step, you have to create the key
with `ssh-keygen` (you may choose an empty passphrase), unless you
have already created one. Thereafter, copy
your public key to the server with `ssh-copy-id`. Next time you log
onto the server, no password is needed. A good source for help with creating
and managing ssh keys is
[GitHub help](https://help.github.com/articles/connecting-to-github-with-ssh/).
You can also configure your ssh to recognize abbreviated
server names and your corresponding user names. This allows you to
connect to the server with a simple command like `ssh info201`. This
information is stored in the file
`~/.ssh/config`, and should contain lines like
```
Host info201
User <your username>
Hostname info201.ischool.uw.edu
```
The `Host` keyword is followed by the abbreviated name of the server;
the following lines contain your username and the publicly visible
hostname of the server. Seek out more information if you are interested.
### D.6\.2 More about command line: pipes and shell patterns
*bash* is a powerful programming language. It is not particularly well suited to performing calculations or producing graphs, but it is excellent at gluing together other programs and their output.
One very powerful construct is the *pipe*. Pipes are in many ways similar to *magrittr* pipes in R, or perhaps we should say it the other way around, as shell pipes were introduced in 1973, a quarter of a century before R was created. A pipe connects the output of one command to the input of another command. For instance, let’s take the commands `ls -l` and `head`. The former lists the files (in long form) and the latter prints the first few lines of a text file. But `head` is not just for printing files; it can print the first few lines of whatever you feed it. Look, for instance, at the following command (actually a compound command):
```
ls -l | head
```
`ls -l` creates the file listing (in long form). But instead of printing it on the screen, it now sends it over the pipe `|` to the `head` utility, which extracts the first lines and prints only those.
Figure: an example of the `ls -l` command printing a number of files, and the same command piped through `head -3`, which retains and prints only the first three lines. Note that the first line of the listing is not a file but the total size of the files in the directory (in kilobytes).
Pipes are not limited to two commands only; you can pipe together as many commands as you wish. For instance, you may want to see the first few lines of a large compressed csv file that contain the word *Zilong*. We use the following commands:
* **bzcat** prints bzip\-compressed data (you normally invoke it like `bzcat file.txt.bz2`). But it just prints and does not do anything else with the output.
* **grep** searches for a pattern in text. It can be used as `grep pattern file`, for instance `grep salary business-report.txt`. Note that *pattern* is a regular expression (rather similar to the ones used by R’s `gsub` and `grep` functions; see the R sketch after this example), so `grep` can search for a wide range of patterns. However, it cannot open compressed files, and neither can it limit the output to just a few lines.
* **head** prints the first few lines of text. You can print out the first *n* lines of a file as `head -n file.txt` (replacing *n* with a number, e.g. `head -3 file.txt`), but again, this does not work with compressed files.
We pipe the commands together as
```
bzcat data.csv.bz2 | grep Zilong | head
```
and achieve the result we want. So pipes are an excellent way to join small commands, each of which is good at only a single task, into complex compound tasks.
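For comparison, and because R can read compressed files on the fly, roughly the same task can be done from inside R. A hedged sketch (the file name and the pattern are placeholders, and unlike the shell pipe it reads the whole file into memory):
```
# open the bzip2-compressed file, read it line by line, keep the lines that
# match the regular expression, and show only the first few of those
con <- bzfile("data.csv.bz2", open = "r")
lines <- readLines(con)
close(con)
head(grep("Zilong", lines, value = TRUE))
```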
Another handy (albeit much less powerful) tool in the shell is shell patterns. These are a little bit like regular expressions for file names, just much simpler. There are two special characters in file name patterns:
* **\*** means any number of any characters. For instance, `a.*` means all files like `a.`, `a.c`, `a.txt`, `a.txt.old`, `a...` and so on. It is just any number of any characters, including none at all, and “any” also means dots. However, the pattern does not cover `ba.c`.
* **?** means a single character, so `a.?` can stand for `a.c` and `a.R` but not for `a.txt`.
Shell patterns are useful for file manipulations where you have to quickly sort through files whose names follow some pattern. The patterns are handled by the shell and not by the individual commands, so they may not work if you are not at the shell prompt but running another program, such as R or a text editor.
For instance, let’s list all *jpg* files in the current directory:
```
ls *.jpg
```
This lists all files matching the pattern `*.jpg`, i.e. everything whose name ends with *.jpg*.
Now let us copy all *png* files from the server to the current directory:
```
scp user@server.com:*.png .
```
This copies all files matching `*.png` from the server to the current directory, i.e. all files whose names end with *.png*.
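R has its own ways of expanding such file name patterns; a small sketch (it runs in whatever your current working directory happens to be):
```
# Sys.glob() understands shell-style patterns directly
Sys.glob("*.jpg")

# list.files() expects a regular expression; glob2rx() converts a shell
# pattern into the equivalent regular expression
list.files(pattern = glob2rx("*.png"))
```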
### D.6\.3 Running Rscript in an ssh Session
A passwordless ssh connection gives you wonderful new possibilities.
First, you don’t even have to log into the server explicitly: you can
run a one\-command ssh session on the server directly from your
laptop, because ssh accepts commands to be run on the remote
machine. If invoked as
```
ssh myusername@server.ischool.edu "Rscript myscript.R"
```
it does not open a remote shell but runs `Rscript myscript.R` on the server instead.
Your command sequence for the whole process will accordingly look something like:
```
rsync -v Desktop/info201/* myusername@server.ischool.edu:scripts/
ssh myusername@server.ischool.edu "Rscript scripts/myscript.R"
rsync -v myusername@server.ischool.edu:scripts/* Desktop/info201/
```
All these commands are issued on your laptop. You can also save them
to a text file and run all three together as a single **shell
script**!
Further, you don’t even need the shell. Instead, you can tell R on
your laptop how to start R on the
remote server over ssh. In this way you can turn your laptop and
server combination
into a high\-performance\-computing cluster! This allows you
to run code on the server directly from within the
R program running
on your laptop. Cluster computing is out of the scope of this chapter, but if you
are interested, look up the **makePSOCKcluster()** function in the **parallel**
package.
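A minimal sketch of the idea, assuming passwordless ssh is already set up, R is installed on the server, and the host and user names below are placeholders:
```
library(parallel)

# start one R worker on the remote server over ssh
cl <- makePSOCKcluster("server.ischool.edu", user = "myusername")

# run a function on the worker and collect the results back on the laptop
parSapply(cl, 1:10, function(i) i^2)

# shut the worker down when done
stopCluster(cl)
```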
1\.1 Packages
-------------
Here are the packages used by the code in this book. The last three are my own:
[tidyxl](https://nacnudus.github.io/tidyxl),
[unpivotr](https://nacnudus.github.io/unpivotr) and
[smungs](https://github.com/nacnudus/smungs). You will need to install the
latest versions from CRAN or GitHub.
```
library(tidyverse)
library(readxl)
library(tidyxl)
library(unpivotr)
library(smungs) # GitHub only https://github.com/nacnudus/smungs
```
1\.2 Data
---------
The examples draw from a spreadsheet of toy data, included in the
[unpivotr](https://nacnudus.github.io/unpivotr) package. It is recommended to
[download](https://github.com/nacnudus/unpivotr/raw/master/inst/extdata/worked-examples.xlsx)
the spreadsheet and have it open in a spreadsheet application while you read the
book.
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
```
2\.1 Clean \& tidy tables
-------------------------
If the tables in the spreadsheet are clean and tidy, then you should use a
package like [readxl](https://github.com/tidyverse/readxl). But it’s worth
knowing how to emulate readxl with tidyxl and unpivotr, because some *almost*
clean tables can be handled using these techniques.
Clean and tidy means
* One table per sheet
* A single row of column headers, or no headers
* A single data type in each column
* Only one kind of sentinel value (to be interpreted as `NA`)
* No meaningful formatting
* No data buried in formulas
* No need to refer to named ranges
Here’s the full process.
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
xlsx_cells(path, sheet = "clean") %>%
behead("up", header) %>%
select(row, data_type, header, character, numeric) %>%
spatter(header) %>%
select(-row)
```
```
## # A tibble: 3 x 2
## Age Name
## <dbl> <chr>
## 1 1 Matilda
## 2 3 Nicholas
## 3 5 Olivia
```
`tidyxl::xlsx_cells()` imports the spreadsheet into a data frame, where each row
of the data frame describes one cell of the spreadsheet. The columns `row` and
`col` (and `address`) describe the position of the cell, and the value of the
cell is in one of the columns `error`, `logical`, `numeric`, `date`,
`character`, depending on the type of data in the cell. The column `data_type`
says which column the value is in. Other columns describe formatting and
formulas.
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
xlsx_cells(path, sheet = "clean") %>%
select(row, col, data_type, character, numeric)
```
```
## # A tibble: 8 x 5
## row col data_type character numeric
## <int> <int> <chr> <chr> <dbl>
## 1 1 1 character Name NA
## 2 1 2 character Age NA
## 3 2 1 character Matilda NA
## 4 2 2 numeric <NA> 1
## 5 3 1 character Nicholas NA
## 6 3 2 numeric <NA> 3
## 7 4 1 character Olivia NA
## 8 4 2 numeric <NA> 5
```
`unpivotr::behead()` takes one level of headers from a pivot table and makes it
part of the data. Think of it like `tidyr::gather()`, except that it works when
there is more than one row of headers (or more than one column of row\-headers),
and it only works on tables that have first come through
`unpivotr::as_cells()` or `tidyxl::xlsx_cells()`.
```
xlsx_cells(path, sheet = "clean") %>%
select(row, col, data_type, character, numeric) %>%
behead("up", header)
```
```
## # A tibble: 6 x 6
## row col data_type character numeric header
## <int> <int> <chr> <chr> <dbl> <chr>
## 1 2 1 character Matilda NA Name
## 2 2 2 numeric <NA> 1 Age
## 3 3 1 character Nicholas NA Name
## 4 3 2 numeric <NA> 3 Age
## 5 4 1 character Olivia NA Name
## 6 4 2 numeric <NA> 5 Age
```
`unpivotr::spatter()` spreads key\-value pairs across multiple columns, like
`tidyr::spread()`, except that it handles mixed data types. It knows which
column contains the cell value (i.e. the `character` column or the `numeric`
column), by checking the `data_type` column. Just like `tidyr::spread()`, it
can be confused by extraneous data, so it’s usually a good idea to drop the
`col` column first, and to keep the `row` column.
```
xlsx_cells(path, sheet = "clean") %>%
select(row, col, data_type, character, numeric) %>%
behead("up", header) %>%
select(-col) %>%
spatter(header) %>%
select(-row)
```
```
## # A tibble: 3 x 2
## Age Name
## <dbl> <chr>
## 1 1 Matilda
## 2 3 Nicholas
## 3 5 Olivia
```
In case the table has no column headers, you can spatter the `col` column
instead of a nonexistent `header` column.
```
xlsx_cells(path, sheet = "clean") %>%
dplyr::filter(row >= 2) %>%
select(row, col, data_type, character, numeric) %>%
spatter(col) %>%
select(-row)
```
```
## # A tibble: 3 x 2
## `1` `2`
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
## 3 Olivia 5
```
Tidyxl and unpivotr are much more complicated than readxl, and that’s the point:
tidyxl and unpivotr give you more power and complexity when you need it.
```
read_excel(path, sheet = "clean")
```
```
## # A tibble: 3 x 2
## Name Age
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
## 3 Olivia 5
```
```
read_excel(path, sheet = "clean", col_names = FALSE, skip = 1)
```
```
## New names:
## * `` -> ...1
## * `` -> ...2
```
```
## # A tibble: 3 x 2
## ...1 ...2
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
## 3 Olivia 5
```
2\.2 Almost\-tidy tables
------------------------
For tables that are already ‘tidy’ (a single row of column headers), use
packages like [readxl](http://readxl.tidyverse.org) that specialise in importing
tidy data.
For everything else, read on.
### 2\.2\.1 Transposed (headers in the first column, data extends to the right)
Most packages for importing data assume that the headers are in the first row,
and each row of data is an observation. They usually don’t support the
alternative: headers in the first column, and each column of data is an
observation.
You can hack a way around this by importing without recognising any headers,
transposing with `t()` (which outputs a matrix), placing the headers as names,
and converting back to a data frame, but this almost always results in all the
data types being coerced (in the output below, the ages become character strings).
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
read_excel(path, sheet = "transposed", col_names = FALSE) %>%
t() %>%
`colnames<-`(.[1, ]) %>%
.[-1, ] %>%
as_tibble()
```
```
## New names:
## * `` -> ...1
## * `` -> ...2
## * `` -> ...3
## * `` -> ...4
```
```
## # A tibble: 3 x 2
## Name Age
## <chr> <chr>
## 1 Matilda 1
## 2 Nicholas 3
## 3 Olivia 5
```
Tidyxl and unpivotr are agnostic to the layout of tables. Importing the
transpose is the same as importing the usual layout, merely using the `"left"`
direction instead of `"up"` when beheading the headers.
```
xlsx_cells(path, sheet = "transposed") %>%
behead("left", header) %>%
select(col, data_type, header, character, numeric) %>%
spatter(header) %>%
select(Name, Age)
```
```
## # A tibble: 3 x 2
## Name Age
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
## 3 Olivia 5
```
### 2\.2\.2 Other stuff on the same sheet
It will be more complicated when the table doesn’t begin in cell A1, or if there
are non\-blank cells above, below or either side of the table.
If you know at coding time which rows and columns the table occupies, then you
can do the following.
* Blank or non\-blank cells above the table: use the `skip` argument of
`readxl::read_excel()`.
* Blank or non\-blank cells either side of the table: use the `col_types`
argument of `readxl::read_excel()` to ignore those columns.
* Blank or non\-blank cells below the table: use `n_max`
argument of `readxl::read_excel()` to ignore those rows.
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
readxl::read_excel(path,
sheet = "notes",
skip = 2,
n_max = 33,
col_types = c("guess", "guess", "skip")) %>%
drop_na()
```
```
## # A tibble: 2 x 2
## Name Age
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
```
If you don’t know at coding time which rows and columns the table occupies (e.g.
when the latest version of the spreadsheet is published and the table has
moved), then one strategy is to read the spreadsheet with `tidyxl::xlsx_cells()`
first, and inspect the results to determine the boundaries of the table. Then
use those boundaries as the `skip`, `n_max` and `col_types` arguments to
`readxl::read_excel()`:
1. Read the spreadsheet with `tidyxl::xlsx_cells()`. Filter the result for
sentinel values, e.g. the cells containing the first and final column
headers, and a cell in the final row of data.
2. Construct the arguments `skip`, `n_max` and `col_types` so that
`readxl::read_excel()` gets the exact dimensions of the table.
```
# Step 1: read the spreadsheet and filter for sentinel values to detect the
# top-left and bottom-right cells
cells <- xlsx_cells(path, sheet = "notes")
rectify(cells)
```
```
## # A tibble: 7 x 5
## `row/col` `1(A)` `2(B)` `3(C)` `4(D)`
## <int> <chr> <chr> <chr> <chr>
## 1 1 Title text <NA> <NA> <NA>
## 2 2 <NA> <NA> <NA> <NA>
## 3 3 <NA> Name Age <NA>
## 4 4 <NA> Matilda 1 <NA>
## 5 5 <NA> Nicholas 3 <NA>
## 6 6 <NA> <NA> <NA> <NA>
## 7 7 <NA> <NA> <NA> Footnote
```
```
top_left <-
dplyr::filter(cells, character == "Name") %>%
select(row, col)
top_left
```
```
## # A tibble: 1 x 2
## row col
## <int> <int>
## 1 3 2
```
```
# It can be tricky to find the bottom-right cell because you have to make some
# assumptions. Here we assume that only cells within the table are numeric.
bottom_right <-
dplyr::filter(cells, data_type == "numeric") %>%
summarise(row = max(row), col = max(col))
bottom_right
```
```
## # A tibble: 1 x 2
## row col
## <int> <int>
## 1 5 3
```
```
# Step 2: construct the arguments `skip` and `n_max` for read_excel()
skip <- top_left$row - 1L
n_rows <- bottom_right$row - skip
read_excel(path, sheet = "notes", skip = skip, n_max = n_rows)
```
```
## # A tibble: 2 x 2
## Name Age
## <chr> <dbl>
## 1 Matilda 1
## 2 Nicholas 3
```
Here’s another way using only tidyxl and unpivotr.
```
# Step 2: filter for cells between the top-left and bottom-right, and spatter
# into a table
cells %>%
dplyr::filter(between(row, top_left$row, bottom_right$row),
between(col, top_left$col, bottom_right$col)) %>%
select(row, col, data_type, character, numeric) %>%
behead("up", header) %>%
select(-col) %>%
spatter(header) %>%
select(-row)
```
```
## # A tibble: 2 x 2
## Age Name
## <dbl> <chr>
## 1 1 Matilda
## 2 3 Nicholas
```
2\.3 Meaningfully formatted rows
--------------------------------
As with [clean, tidy tables](clean), but with a second step to interpret the
formatting.
Sometimes whole rows in a table are highlighted by formatting them with, say, a
bright yellow fill. The highlighting could mean “this observation should be
ignored”, or “this product is no longer available”. Different colours could
mean different levels of a hierarchy, e.g. green for “pass” and red for “fail”.
There are three steps to interpreting this.
1. Import the table, taking only the cell values and ignoring the formatting.
2. Import one column of the table, taking only the formatting and not the cell
values.
3. Use `dplyr::bind_cols()` to append the column of formatting to the table of
cell values. You can then interpret the formatting however you like.
Step 1 is the same as [clean, tidy tables](clean).
Step 2 uses `tidyxl::xlsx_cells()` to load the data, `tidyxl::xlsx_formats()` to
load the formatting, and several tidyverse functions to link the two and filter for only one column.
Why only one column? Because if a whole row is highlighted, then you only need
to know the highlighting of one column to know the highlighting of all the
others.
This is a special case of the following section, [meaningfully formatted
cells](tidy-formatted-cells). Here `dplyr::bind_cols()` can be used as a
shortcut, because we are joining exactly `n` rows of formatting to `n` rows of
data. The following section describes a more general case that can be used instead of
this procedure.
```
# Step 1: import the table taking only cell values and ignoring the formatting
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
x <- read_excel(path, sheet = "highlights")
# Step 2: import one column of the table, taking only the formatting and not the
# cell values
# `fill_colours` is a palette of fill colours that can be indexed by the
# `local_format_id` of a given cell to get the fill colour of that cell
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
# Import all the cells, filter out the header row, filter for the first column,
# and create a new column `fill_colour` of the fill colours, by looking up the
# local_format_id of each cell in the `fill_colours` palette.
fills <-
xlsx_cells(path, sheet = "highlights") %>%
dplyr::filter(row >= 2, col == 1) %>% # Omit the header row
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(fill_colour)
# Step 3: append the `fill` column to the rest of the data
bind_cols(x, fills) %>%
select(Age, Height, fill_colour)
```
```
## # A tibble: 3 x 3
## Age Height fill_colour
## <dbl> <dbl> <chr>
## 1 1 2 <NA>
## 2 3 4 FFFFFF00
## 3 5 6 <NA>
```
Note that the fill colour is expressed as an ARGB value: the first two hex digits
are the transparency (alpha), e.g. `FFFFFF00` is `FF` (fully opaque) followed by `FFFF00` (yellow).
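A tiny sketch of pulling the two parts out of such a string:
```
argb <- "FFFFFF00"
substr(argb, 1, 2)  # "FF": the transparency (alpha) part, here fully opaque
substr(argb, 3, 8)  # "FFFF00": the RGB part, here yellow
```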
Here’s another way using only tidyxl and unpivotr.
```
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
xlsx_cells(path, sheet = "highlights") %>%
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(row, col, data_type, character, numeric, fill_colour) %>%
behead("up", header) %>%
select(-col, -character) %>%
spatter(header) %>%
select(-row)
```
```
## # A tibble: 3 x 3
## fill_colour Age Height
## <chr> <dbl> <dbl>
## 1 <NA> 1 2
## 2 FFFFFF00 3 4
## 3 <NA> 5 6
```
2\.4 Meaningfully formatted cells
---------------------------------
If single cells are highlighted, rather than whole rows, then the highlights
probably indicate something about the column rather than the row. For example,
a highlighted cell in a column called “age” of a table of medical patients,
might mean “the age of this patient is uncertain”.
One way to deal with this is to create a new column in the final table for each
column in the original that has any highlighted cells. For example, if
highlighted cells mean “this value is uncertain”, and some cells in the `age`
and `height` columns are highlighted, then you could create two new columns:
`uncertain_age`, and `uncertain_height`, by following the procedure of
[meaningfully formatted rows](tidy-formatted-rows) for each column `age` and
`height`.
```
# Step 1: import the table taking only cell values and ignoring the formatting
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
x <- read_excel(path, sheet = "annotations")
# Step 2: import one column of the table, taking only the formatting and not the
# cell values
# `fill_colours` is a palette of fill colours that can be indexed by the
# `local_format_id` of a given cell to get the fill colour of that cell
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
# Import all the cells, omit the header row and the name column, and create
# new columns of fill colours (later renamed to `*_fill`), by looking up the
# local_format_id of each cell in the `fill_colours` palette.
fills <-
xlsx_cells(path, sheet = "annotations") %>%
dplyr::filter(row >= 2, col >= 2) %>% # Omit the header row and name column
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(row, col, fill_colour) %>%
spread(col, fill_colour) %>%
select(-row) %>%
set_names(paste0(colnames(x)[-1], "_fill"))
fills
```
```
## # A tibble: 3 x 2
## Age_fill Height_fill
## <chr> <chr>
## 1 <NA> <NA>
## 2 FFFFFF00 <NA>
## 3 <NA> FF92D050
```
```
# Step 3: append the `fill` column to the rest of the data
bind_cols(x, fills)
```
```
## # A tibble: 3 x 5
## Name Age Height Age_fill Height_fill
## <chr> <dbl> <dbl> <chr> <chr>
## 1 Matilda 1 2 <NA> <NA>
## 2 Nicholas 3 4 FFFFFF00 <NA>
## 3 Olivia 5 6 <NA> FF92D050
```
Here’s the same thing, but using only tidyxl and unpivotr
```
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
cells <-
xlsx_cells(path, sheet = "annotations") %>%
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(row, col, data_type, character, numeric, fill_colour)
cells
```
```
## # A tibble: 12 x 6
## row col data_type character numeric fill_colour
## <int> <int> <chr> <chr> <dbl> <chr>
## 1 1 1 character Name NA <NA>
## 2 1 2 character Age NA <NA>
## 3 1 3 character Height NA <NA>
## 4 2 1 character Matilda NA <NA>
## 5 2 2 numeric <NA> 1 <NA>
## 6 2 3 numeric <NA> 2 <NA>
## 7 3 1 character Nicholas NA <NA>
## 8 3 2 numeric <NA> 3 FFFFFF00
## 9 3 3 numeric <NA> 4 <NA>
## 10 4 1 character Olivia NA <NA>
## 11 4 2 numeric <NA> 5 <NA>
## 12 4 3 numeric <NA> 6 FF92D050
```
```
values <-
cells %>%
select(-fill_colour) %>%
behead("up", header) %>%
select(-col) %>%
spatter(header)
values
```
```
## # A tibble: 3 x 4
## row Age Height Name
## <int> <dbl> <dbl> <chr>
## 1 2 1 2 Matilda
## 2 3 3 4 Nicholas
## 3 4 5 6 Olivia
```
```
fills <-
cells %>%
behead("up", header) %>%
mutate(header = paste0(header, "_fill")) %>%
select(row, header, fill_colour) %>%
spread(header, fill_colour)
fills
```
```
## # A tibble: 3 x 4
## row Age_fill Height_fill Name_fill
## <int> <chr> <chr> <chr>
## 1 2 <NA> <NA> <NA>
## 2 3 FFFFFF00 <NA> <NA>
## 3 4 <NA> FF92D050 <NA>
```
```
left_join(values, fills, by = "row") %>%
select(-row)
```
```
## # A tibble: 3 x 6
## Age Height Name Age_fill Height_fill Name_fill
## <dbl> <dbl> <chr> <chr> <chr> <chr>
## 1 1 2 Matilda <NA> <NA> <NA>
## 2 3 4 Nicholas FFFFFF00 <NA> <NA>
## 3 5 6 Olivia <NA> FF92D050 <NA>
```
Another way would be to make the table what I call “extra\-tidy”. If it is tidy,
then each row is an observation, and each column is a variable. To make it
“extra\-tidy”, you `gather()` the variables so that each row is *one observation
of one variable*. This works best when every variable has the same data type,
otherwise the values will be coerced, probably to character.
```
# Tidy
(x <- read_excel(path, sheet = "annotations"))
```
```
## # A tibble: 3 x 3
## Name Age Height
## <chr> <dbl> <dbl>
## 1 Matilda 1 2
## 2 Nicholas 3 4
## 3 Olivia 5 6
```
```
# Extra-tidy
extra_tidy <-
x %>%
gather(variable, value, -Name) %>%
arrange(Name, variable)
extra_tidy
```
```
## # A tibble: 6 x 3
## Name variable value
## <chr> <chr> <dbl>
## 1 Matilda Age 1
## 2 Matilda Height 2
## 3 Nicholas Age 3
## 4 Nicholas Height 4
## 5 Olivia Age 5
## 6 Olivia Height 6
```
With an extra\-tidy dataset, the formatting can now be appended to the values of
individual variables, rather than to whole observations.
```
# Extra-tidy, with row and column numbers of the original variables
extra_tidy <-
read_excel(path, sheet = "annotations") %>%
mutate(row = row_number() + 1L) %>%
gather(variable, value, -row, -Name) %>%
group_by(row) %>%
mutate(col = row_number() + 1L) %>%
ungroup() %>%
select(row, col, Name, variable, value) %>%
arrange(row, col)
extra_tidy
```
```
## # A tibble: 6 x 5
## row col Name variable value
## <int> <int> <chr> <chr> <dbl>
## 1 2 2 Matilda Age 1
## 2 2 3 Matilda Height 2
## 3 3 2 Nicholas Age 3
## 4 3 3 Nicholas Height 4
## 5 4 2 Olivia Age 5
## 6 4 3 Olivia Height 6
```
```
# `fill_colours` is a palette of fill colours that can be indexed by the
# `local_format_id` of a given cell to get the fill colour of that cell
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
# Import all the cells, omit the header row and the name column, and create a
# new column `fill_colour` of the fill colours, by looking up the
# local_format_id of each cell in the `fill_colours` palette.
fills <-
xlsx_cells(path, sheet = "annotations") %>%
dplyr::filter(row >= 2, col >= 2) %>% # Omit the header row and name column
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(row, col, fill_colour)
fills
```
```
## # A tibble: 6 x 3
## row col fill_colour
## <int> <int> <chr>
## 1 2 2 <NA>
## 2 2 3 <NA>
## 3 3 2 FFFFFF00
## 4 3 3 <NA>
## 5 4 2 <NA>
## 6 4 3 FF92D050
```
```
# Step 3: append the `fill` column to the rest of the data
left_join(extra_tidy, fills, by = c("row", "col"))
```
```
## # A tibble: 6 x 6
## row col Name variable value fill_colour
## <int> <int> <chr> <chr> <dbl> <chr>
## 1 2 2 Matilda Age 1 <NA>
## 2 2 3 Matilda Height 2 <NA>
## 3 3 2 Nicholas Age 3 FFFFFF00
## 4 3 3 Nicholas Height 4 <NA>
## 5 4 2 Olivia Age 5 <NA>
## 6 4 3 Olivia Height 6 FF92D050
```
Here’s the same extra\-tidy version, but using only tidyxl and unpivotr.
```
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
xlsx_cells(path, sheet = "annotations") %>%
mutate(fill_colour = fill_colours[local_format_id]) %>%
select(row, col, data_type, character, numeric, fill_colour) %>%
behead("left", Name) %>%
behead("up", variable) %>%
select(-data_type, -character, value = numeric)
```
```
## # A tibble: 6 x 6
## row col value fill_colour Name variable
## <int> <int> <dbl> <chr> <chr> <chr>
## 1 2 2 1 <NA> Matilda Age
## 2 2 3 2 <NA> Matilda Height
## 3 3 2 3 FFFFFF00 Nicholas Age
## 4 3 3 4 <NA> Nicholas Height
## 5 4 2 5 <NA> Olivia Age
## 6 4 3 6 FF92D050 Olivia Height
```
2\.5 Layered meaningful formatting
----------------------------------
Sometimes different kinds of formatting relate to clearly different aspects of
an observation, e.g. yellow highlight for “uncertain data” and red text for
“product no longer available”. Both yellow highlighting and red text in the
same row would indicate uncertain data and unavailability of the product at the
same time.
Deal with it by reading each kind of formatting into a separate column, e.g.
fill colour into one column, font colour into another, bold/not\-bold into
another, etc.
```
# Step 1: import the table taking only cell values and ignoring the formatting
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
x <- read_excel(path, sheet = "combined-highlights")
# Step 2: import the formatting of one column of the table
# `fill_colours` and `font_colours` are palettes of colours that can be indexed
# by the `local_format_id` of a given cell to get the colours of that cell
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
font_colours <- xlsx_formats(path)$local$font$color$rgb
# Import all the cells, filter out the header row, filter for the first column,
# and create new columns `fill_colour` and `font_colour`, by looking up the
# local_format_id of each cell in the colour palettes.
formats <-
xlsx_cells(path, sheet = "combined-highlights") %>%
dplyr::filter(row >= 2, col == 1) %>% # Omit the header row
mutate(fill_colour = fill_colours[local_format_id],
font_colour = font_colours[local_format_id]) %>%
select(fill_colour, font_colour)
# Step 3: append the `fill` column to the rest of the data
bind_cols(x, formats)
```
```
## # A tibble: 4 x 5
## Name Weight Price fill_colour font_colour
## <chr> <dbl> <dbl> <chr> <chr>
## 1 Knife 7 8 <NA> FF000000
## 2 Fork 5 6 FFFFFF00 FF000000
## 3 Spoon 3 4 <NA> FFFF0000
## 4 Teaspoon 1 2 FFFFFF00 FFFF0000
```
Here’s the same thing, but using only tidyxl and unpivotr.
```
fill_colours <- xlsx_formats(path)$local$fill$patternFill$fgColor$rgb
font_colours <- xlsx_formats(path)$local$font$color$rgb
cells <-
xlsx_cells(path, sheet = "combined-highlights") %>%
mutate(fill_colour = fill_colours[local_format_id],
font_colour = font_colours[local_format_id]) %>%
select(row, col, data_type, character, numeric, fill_colour, font_colour) %>%
behead("up", header) %>%
behead("left", Name) %>%
select(-col, -character)
values <-
cells %>%
select(-fill_colour, -font_colour) %>%
spread(header, numeric)
formats <- distinct(cells, row, fill_colour, font_colour)
left_join(values, formats, by = "row") %>%
select(-row)
```
```
## # A tibble: 4 x 6
## data_type Name Price Weight fill_colour font_colour
## <chr> <chr> <dbl> <dbl> <chr> <chr>
## 1 numeric Knife 8 7 <NA> FF000000
## 2 numeric Fork 6 5 FFFFFF00 <NA>
## 3 numeric Spoon 4 3 <NA> FFFF0000
## 4 numeric Teaspoon 2 1 FFFFFF00 FFFF0000
```